00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 92
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3270
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.082 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.082 The recommended git tool is: git
00:00:00.082 using credential 00000000-0000-0000-0000-000000000002
00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.125 Fetching changes from the remote Git repository
00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.166 Using shallow fetch with depth 1
00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.166 > git --version # timeout=10
00:00:00.195 > git --version # 'git version 2.39.2'
00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.988 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.001 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.014 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:06.014 > git config core.sparsecheckout # timeout=10
00:00:06.025 > git read-tree -mu HEAD # timeout=10
00:00:06.043 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:06.064 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:06.064 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:06.148 [Pipeline] Start of Pipeline
00:00:06.162 [Pipeline] library
00:00:06.164 Loading library shm_lib@master
00:00:06.164 Library shm_lib@master is cached. Copying from home.
00:00:06.181 [Pipeline] node
00:00:06.187 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.190 [Pipeline] {
00:00:06.202 [Pipeline] catchError
00:00:06.203 [Pipeline] {
00:00:06.216 [Pipeline] wrap
00:00:06.228 [Pipeline] {
00:00:06.237 [Pipeline] stage
00:00:06.240 [Pipeline] { (Prologue)
00:00:06.438 [Pipeline] sh
00:00:06.723 + logger -p user.info -t JENKINS-CI
00:00:06.741 [Pipeline] echo
00:00:06.742 Node: GP8
00:00:06.749 [Pipeline] sh
00:00:07.050 [Pipeline] setCustomBuildProperty
00:00:07.063 [Pipeline] echo
00:00:07.065 Cleanup processes
00:00:07.071 [Pipeline] sh
00:00:07.361 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.361 80155 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.377 [Pipeline] sh
00:00:07.663 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.664 ++ grep -v 'sudo pgrep'
00:00:07.664 ++ awk '{print $1}'
00:00:07.664 + sudo kill -9
00:00:07.664 + true
00:00:07.709 [Pipeline] cleanWs
00:00:07.722 [WS-CLEANUP] Deleting project workspace...
00:00:07.722 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.730 [WS-CLEANUP] done
00:00:07.736 [Pipeline] setCustomBuildProperty
00:00:07.755 [Pipeline] sh
00:00:08.041 + sudo git config --global --replace-all safe.directory '*'
00:00:08.166 [Pipeline] httpRequest
00:00:08.206 [Pipeline] echo
00:00:08.208 Sorcerer 10.211.164.101 is alive
00:00:08.218 [Pipeline] httpRequest
00:00:08.223 HttpMethod: GET
00:00:08.224 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:08.224 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:08.250 Response Code: HTTP/1.1 200 OK
00:00:08.251 Success: Status code 200 is in the accepted range: 200,404
00:00:08.251 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:31.386 [Pipeline] sh
00:00:31.686 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:31.703 [Pipeline] httpRequest
00:00:31.725 [Pipeline] echo
00:00:31.728 Sorcerer 10.211.164.101 is alive
00:00:31.737 [Pipeline] httpRequest
00:00:31.744 HttpMethod: GET
00:00:31.745 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:31.746 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:31.749 Response Code: HTTP/1.1 200 OK
00:00:31.749 Success: Status code 200 is in the accepted range: 200,404
00:00:31.750 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:48.528 [Pipeline] sh
00:00:48.813 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:52.106 [Pipeline] sh
00:00:52.390 + git -C spdk log --oneline -n5
00:00:52.390 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:00:52.390 330a4f94d nvme: check pthread_mutex_destroy() return value
00:00:52.390 7b72c3ced nvme: add nvme_ctrlr_lock
00:00:52.390 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock
00:00:52.390 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout
00:00:52.408 [Pipeline] withCredentials
00:00:52.417 > git --version # timeout=10
00:00:52.428 > git --version # 'git version 2.39.2'
00:00:52.455 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:52.457 [Pipeline] {
00:00:52.467 [Pipeline] retry
00:00:52.469 [Pipeline] {
00:00:52.483 [Pipeline] sh
00:00:52.944 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:52.957 [Pipeline] }
00:00:52.981 [Pipeline] // retry
00:00:52.986 [Pipeline] }
00:00:53.007 [Pipeline] // withCredentials
00:00:53.016 [Pipeline] httpRequest
00:00:53.032 [Pipeline] echo
00:00:53.034 Sorcerer 10.211.164.101 is alive
00:00:53.042 [Pipeline] httpRequest
00:00:53.046 HttpMethod: GET
00:00:53.046 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:53.047 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:53.053 Response Code: HTTP/1.1 200 OK
00:00:53.054 Success: Status code 200 is in the accepted range: 200,404
00:00:53.054 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:11.638 [Pipeline] sh
00:01:11.920 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:13.836 [Pipeline] sh
00:01:14.121 + git -C dpdk log --oneline -n5
00:01:14.121 eeb0605f11 version: 23.11.0
00:01:14.121 238778122a doc: update release notes for 23.11
00:01:14.121 46aa6b3cfc doc: fix description of RSS features
00:01:14.121 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:14.121 7e421ae345 devtools: support skipping forbid rule check
00:01:14.134 [Pipeline] }
00:01:14.155 [Pipeline] // stage
00:01:14.167 [Pipeline] stage
00:01:14.170 [Pipeline] { (Prepare)
00:01:14.192 [Pipeline] writeFile
00:01:14.209 [Pipeline] sh
00:01:14.492 + logger -p user.info -t JENKINS-CI
00:01:14.505 [Pipeline] sh
00:01:14.812 + logger -p user.info -t JENKINS-CI
00:01:14.823 [Pipeline] sh
00:01:15.102 + cat autorun-spdk.conf
00:01:15.102 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.102 SPDK_TEST_NVMF=1
00:01:15.102 SPDK_TEST_NVME_CLI=1
00:01:15.102 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.102 SPDK_TEST_NVMF_NICS=e810
00:01:15.102 SPDK_TEST_VFIOUSER=1
00:01:15.102 SPDK_RUN_UBSAN=1
00:01:15.102 NET_TYPE=phy
00:01:15.102 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:15.102 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:15.110 RUN_NIGHTLY=1
00:01:15.115 [Pipeline] readFile
00:01:15.143 [Pipeline] withEnv
00:01:15.146 [Pipeline] {
00:01:15.159 [Pipeline] sh
00:01:15.441 + set -ex
00:01:15.441 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:15.441 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.441 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.441 ++ SPDK_TEST_NVMF=1
00:01:15.441 ++ SPDK_TEST_NVME_CLI=1
00:01:15.441 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.441 ++ SPDK_TEST_NVMF_NICS=e810
00:01:15.441 ++ SPDK_TEST_VFIOUSER=1
00:01:15.441 ++ SPDK_RUN_UBSAN=1
00:01:15.441 ++ NET_TYPE=phy
00:01:15.441 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:15.441 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:15.441 ++ RUN_NIGHTLY=1
00:01:15.441 + case $SPDK_TEST_NVMF_NICS in
00:01:15.441 + DRIVERS=ice
00:01:15.441 + [[ tcp == \r\d\m\a ]]
00:01:15.441 + [[ -n ice ]]
00:01:15.441 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:15.441 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:15.441 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:15.441 rmmod: ERROR: Module irdma is not currently loaded
00:01:15.441 rmmod: ERROR: Module i40iw is not currently loaded
00:01:15.441 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:15.441 + true
00:01:15.441 + for D in $DRIVERS
00:01:15.441 + sudo modprobe ice
00:01:15.441 + exit 0
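The sh step above is the NIC driver setup for this job: SPDK_TEST_NVMF_NICS=e810 selects Intel's ice driver, RDMA modules that could claim the interface are unloaded first, and the selected driver is then probed. A minimal standalone sketch of that selection logic, reconstructed from the xtrace above (the non-e810 fallback branch is an illustrative assumption, not verbatim SPDK source):

    # Sketch of the driver selection traced above; the fallback branch is assumed.
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;  # Intel E810 NICs are served by the ice driver
        *)    DRIVERS=    ;;  # hypothetical: other NIC families map to their own drivers
    esac
    if [[ -n $DRIVERS ]]; then
        # Drop competing RDMA modules; "not currently loaded" errors are harmless.
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe "$D"
        done
    fi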
00:01:15.451 [Pipeline] }
00:01:15.472 [Pipeline] // withEnv
00:01:15.477 [Pipeline] }
00:01:15.496 [Pipeline] // stage
00:01:15.508 [Pipeline] catchError
00:01:15.510 [Pipeline] {
00:01:15.527 [Pipeline] timeout
00:01:15.528 Timeout set to expire in 50 min
00:01:15.530 [Pipeline] {
00:01:15.548 [Pipeline] stage
00:01:15.551 [Pipeline] { (Tests)
00:01:15.567 [Pipeline] sh
00:01:15.848 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:15.848 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:15.848 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:15.848 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:15.848 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.848 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:15.848 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:15.848 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:15.848 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:15.848 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:15.848 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:15.848 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:15.848 + source /etc/os-release
00:01:15.848 ++ NAME='Fedora Linux'
00:01:15.848 ++ VERSION='38 (Cloud Edition)'
00:01:15.848 ++ ID=fedora
00:01:15.848 ++ VERSION_ID=38
00:01:15.848 ++ VERSION_CODENAME=
00:01:15.848 ++ PLATFORM_ID=platform:f38
00:01:15.848 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:15.848 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:15.848 ++ LOGO=fedora-logo-icon
00:01:15.848 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:15.848 ++ HOME_URL=https://fedoraproject.org/
00:01:15.848 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:15.848 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:15.848 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:15.848 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:15.848 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:15.848 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:15.848 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:15.848 ++ SUPPORT_END=2024-05-14
00:01:15.848 ++ VARIANT='Cloud Edition'
00:01:15.848 ++ VARIANT_ID=cloud
00:01:15.848 + uname -a
00:01:15.848 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:15.848 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:17.220 Hugepages
00:01:17.220 node hugesize free / total
00:01:17.220 node0 1048576kB 0 / 0
00:01:17.220 node0 2048kB 0 / 0
00:01:17.220 node1 1048576kB 0 / 0
00:01:17.220 node1 2048kB 0 / 0
00:01:17.220 
00:01:17.220 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:17.220 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:17.220 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:17.220 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:17.220 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:17.220 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:17.220 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:17.220 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:17.220 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:17.220 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:17.220 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:17.220 + rm -f /tmp/spdk-ld-path
00:01:17.220 + source autorun-spdk.conf
00:01:17.220 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.220 ++ SPDK_TEST_NVMF=1
00:01:17.220 ++ SPDK_TEST_NVME_CLI=1
00:01:17.220 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.220 ++ SPDK_TEST_NVMF_NICS=e810
00:01:17.220 ++ SPDK_TEST_VFIOUSER=1
00:01:17.220 ++ SPDK_RUN_UBSAN=1
00:01:17.220 ++ NET_TYPE=phy
00:01:17.220 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:17.220 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:17.220 ++ RUN_NIGHTLY=1
00:01:17.220 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:17.220 + [[ -n '' ]]
00:01:17.220 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:17.220 + for M in /var/spdk/build-*-manifest.txt
00:01:17.220 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:17.220 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:17.220 + for M in /var/spdk/build-*-manifest.txt
00:01:17.220 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:17.220 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:17.220 ++ uname
00:01:17.220 + [[ Linux == \L\i\n\u\x ]]
00:01:17.220 + sudo dmesg -T
00:01:17.220 + sudo dmesg --clear
00:01:17.220 + dmesg_pid=80891
00:01:17.220 + sudo dmesg -Tw
00:01:17.220 + [[ Fedora Linux == FreeBSD ]]
00:01:17.220 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:17.220 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:17.220 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:17.220 + [[ -x /usr/src/fio-static/fio ]]
00:01:17.220 + export FIO_BIN=/usr/src/fio-static/fio
00:01:17.220 + FIO_BIN=/usr/src/fio-static/fio
00:01:17.220 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:17.220 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:17.220 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:17.220 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:17.220 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:17.220 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:17.220 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:17.220 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:17.220 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:17.220 Test configuration:
00:01:17.220 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.220 SPDK_TEST_NVMF=1
00:01:17.220 SPDK_TEST_NVME_CLI=1
00:01:17.220 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:17.220 SPDK_TEST_NVMF_NICS=e810
00:01:17.220 SPDK_TEST_VFIOUSER=1
00:01:17.220 SPDK_RUN_UBSAN=1
00:01:17.220 NET_TYPE=phy
00:01:17.220 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:17.220 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:17.220 RUN_NIGHTLY=1
16:00:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
16:00:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
16:00:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:00:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:00:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:00:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:00:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:00:00 -- paths/export.sh@5 -- $ export PATH
16:00:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:00:00 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
16:00:00 -- common/autobuild_common.sh@437 -- $ date +%s
16:00:00 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721052000.XXXXXX
16:00:00 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721052000.C2ZZRk
16:00:00 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
16:00:00 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']'
16:00:00 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
16:00:00 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
16:00:00 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
16:00:00 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
16:00:00 -- common/autobuild_common.sh@453 -- $ get_config_params
16:00:00 -- common/autotest_common.sh@395 -- $ xtrace_disable
16:00:00 -- common/autotest_common.sh@10 -- $ set +x
16:00:00 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
16:00:00 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
16:00:00 -- pm/common@17 -- $ local monitor
16:00:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:00:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:00:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:00:00 -- pm/common@21 -- $ date +%s
16:00:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:00:00 -- pm/common@21 -- $ date +%s
16:00:00 -- pm/common@25 -- $ sleep 1
16:00:00 -- pm/common@21 -- $ date +%s
16:00:00 -- pm/common@21 -- $ date +%s
16:00:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052000
16:00:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052000
16:00:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052000
16:00:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721052000
00:01:17.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052000_collect-vmstat.pm.log
00:01:17.220 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052000_collect-cpu-load.pm.log
00:01:17.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052000_collect-cpu-temp.pm.log
00:01:17.221 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721052000_collect-bmc-pm.bmc.pm.log
00:01:18.153 16:00:01 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
16:00:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
16:00:01 -- spdk/autobuild.sh@12 -- $ umask 022
16:00:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
16:00:01 -- spdk/autobuild.sh@16 -- $ date -u
00:01:18.153 Mon Jul 15 02:00:01 PM UTC 2024
16:00:01 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:18.153 v24.05-13-g5fa2f5086
16:00:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
16:00:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:00:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:00:01 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
16:00:01 -- common/autotest_common.sh@1103 -- $ xtrace_disable
16:00:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.153 ************************************
00:01:18.153 START TEST ubsan
00:01:18.153 ************************************
16:00:01 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:01:18.153 using ubsan
00:01:18.153 
00:01:18.153 real 0m0.000s
00:01:18.153 user 0m0.000s
00:01:18.153 sys 0m0.000s
16:00:01 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
16:00:01 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:18.153 ************************************
00:01:18.153 END TEST ubsan
00:01:18.153 ************************************
00:01:18.410 16:00:01 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
16:00:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
16:00:01 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
16:00:01 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
16:00:01 -- common/autotest_common.sh@1103 -- $ xtrace_disable
16:00:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.410 ************************************
00:01:18.410 START TEST build_native_dpdk
00:01:18.410 ************************************
16:00:01 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
16:00:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
16:00:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
16:00:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
16:00:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
16:00:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
16:00:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
16:00:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
16:00:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
16:00:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
16:00:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
16:00:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
16:00:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
16:00:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
16:00:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
16:00:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
16:00:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
16:00:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:18.410 eeb0605f11 version: 23.11.0
00:01:18.410 238778122a doc: update release notes for 23.11
00:01:18.410 46aa6b3cfc doc: fix description of RSS features
00:01:18.410 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:18.410 7e421ae345 devtools: support skipping forbid rule check
16:00:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
16:00:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
16:00:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
16:00:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
16:00:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
16:00:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
16:00:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
16:00:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
16:00:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
16:00:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
16:00:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
16:00:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
16:00:01 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0
16:00:01 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
16:00:01 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
16:00:01 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
16:00:01 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
16:00:01 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
16:00:01 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
16:00:01 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
16:00:01 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
16:00:01 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
16:00:01 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
16:00:01 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
16:00:01 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
16:00:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
16:00:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
16:00:01 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
16:00:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
16:00:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
16:00:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
16:00:01 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
16:00:01 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
16:00:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
16:00:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
16:00:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
16:00:01 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
16:00:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
16:00:01 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
16:00:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:18.411 patching file config/rte_config.h
00:01:18.411 Hunk #1 succeeded at 60 (offset 1 line).
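The xtrace above steps through the dotted-version comparison in SPDK's scripts/common.sh: lt calls cmp_versions, which splits both versions on IFS=.-:, compares them field by field, and returns 1 here because 23.11.0 is not older than 21.11.0, so the build proceeds straight to patching config/rte_config.h. A minimal bash sketch of that comparison, reconstructed from the trace (the standalone function body and the zero-defaulting of missing fields are illustrative assumptions, not verbatim SPDK source):

    # Sketch: succeed (return 0) iff dotted version $1 is strictly older than $2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not strictly older
    }
    lt 23.11.0 21.11.0 || echo 'not older; take the >= 21.11 path'  # mirrors the traced return 1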
16:00:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
16:00:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
16:00:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
16:00:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
16:00:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:22.598 The Meson build system
00:01:22.598 Version: 1.3.1
00:01:22.598 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:22.598 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:22.598 Build type: native build
00:01:22.598 Program cat found: YES (/usr/bin/cat)
00:01:22.598 Project name: DPDK
00:01:22.598 Project version: 23.11.0
00:01:22.598 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:22.598 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:22.598 Host machine cpu family: x86_64
00:01:22.598 Host machine cpu: x86_64
00:01:22.598 Message: ## Building in Developer Mode ##
00:01:22.598 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:22.598 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:22.598 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:22.598 Program python3 found: YES (/usr/bin/python3)
00:01:22.598 Program cat found: YES (/usr/bin/cat)
00:01:22.598 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:22.598 Compiler for C supports arguments -march=native: YES
00:01:22.598 Checking for size of "void *" : 8
00:01:22.598 Checking for size of "void *" : 8 (cached)
00:01:22.598 Library m found: YES
00:01:22.598 Library numa found: YES
00:01:22.598 Has header "numaif.h" : YES
00:01:22.598 Library fdt found: NO
00:01:22.598 Library execinfo found: NO
00:01:22.598 Has header "execinfo.h" : YES
00:01:22.598 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:22.598 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:22.598 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:22.598 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:22.598 Run-time dependency openssl found: YES 3.0.9
00:01:22.598 Run-time dependency libpcap found: YES 1.10.4
00:01:22.598 Has header "pcap.h" with dependency libpcap: YES
00:01:22.598 Compiler for C supports arguments -Wcast-qual: YES
00:01:22.598 Compiler for C supports arguments -Wdeprecated: YES
00:01:22.598 Compiler for C supports arguments -Wformat: YES
00:01:22.598 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:22.598 Compiler for C supports arguments -Wformat-security: NO
00:01:22.598 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:22.598 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:22.598 Compiler for C supports arguments -Wnested-externs: YES
00:01:22.598 Compiler for C supports arguments -Wold-style-definition: YES
00:01:22.598 Compiler for C supports arguments -Wpointer-arith: YES
00:01:22.598 Compiler for C supports arguments -Wsign-compare: YES
00:01:22.598 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:22.598 Compiler for C supports arguments -Wundef: YES
00:01:22.598 Compiler for C supports arguments -Wwrite-strings: YES
00:01:22.598 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:22.598 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:22.598 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:22.598 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:22.598 Program objdump found: YES (/usr/bin/objdump)
00:01:22.598 Compiler for C supports arguments -mavx512f: YES
00:01:22.598 Checking if "AVX512 checking" compiles: YES
00:01:22.598 Fetching value of define "__SSE4_2__" : 1
00:01:22.598 Fetching value of define "__AES__" : 1
00:01:22.598 Fetching value of define "__AVX__" : 1
00:01:22.598 Fetching value of define "__AVX2__" : (undefined)
00:01:22.598 Fetching value of define "__AVX512BW__" : (undefined)
00:01:22.598 Fetching value of define "__AVX512CD__" : (undefined)
00:01:22.598 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:22.598 Fetching value of define "__AVX512F__" : (undefined)
00:01:22.598 Fetching value of define "__AVX512VL__" : (undefined)
00:01:22.598 Fetching value of define "__PCLMUL__" : 1
00:01:22.598 Fetching value of define "__RDRND__" : 1
00:01:22.598 Fetching value of define "__RDSEED__" : (undefined)
00:01:22.598 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:22.598 Fetching value of define "__znver1__" : (undefined)
00:01:22.598 Fetching value of define "__znver2__" : (undefined)
00:01:22.598 Fetching value of define "__znver3__" : (undefined)
00:01:22.598 Fetching value of define "__znver4__" : (undefined)
00:01:22.598 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:22.598 Message: lib/log: Defining dependency "log"
00:01:22.598 Message: lib/kvargs: Defining dependency "kvargs"
"kvargs" 00:01:22.598 Message: lib/telemetry: Defining dependency "telemetry" 00:01:22.598 Checking for function "getentropy" : NO 00:01:22.598 Message: lib/eal: Defining dependency "eal" 00:01:22.598 Message: lib/ring: Defining dependency "ring" 00:01:22.598 Message: lib/rcu: Defining dependency "rcu" 00:01:22.598 Message: lib/mempool: Defining dependency "mempool" 00:01:22.598 Message: lib/mbuf: Defining dependency "mbuf" 00:01:22.598 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:22.598 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:22.598 Compiler for C supports arguments -mpclmul: YES 00:01:22.598 Compiler for C supports arguments -maes: YES 00:01:22.598 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:22.598 Compiler for C supports arguments -mavx512bw: YES 00:01:22.598 Compiler for C supports arguments -mavx512dq: YES 00:01:22.598 Compiler for C supports arguments -mavx512vl: YES 00:01:22.598 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:22.598 Compiler for C supports arguments -mavx2: YES 00:01:22.598 Compiler for C supports arguments -mavx: YES 00:01:22.598 Message: lib/net: Defining dependency "net" 00:01:22.598 Message: lib/meter: Defining dependency "meter" 00:01:22.598 Message: lib/ethdev: Defining dependency "ethdev" 00:01:22.598 Message: lib/pci: Defining dependency "pci" 00:01:22.598 Message: lib/cmdline: Defining dependency "cmdline" 00:01:22.598 Message: lib/metrics: Defining dependency "metrics" 00:01:22.598 Message: lib/hash: Defining dependency "hash" 00:01:22.598 Message: lib/timer: Defining dependency "timer" 00:01:22.598 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:22.598 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:22.598 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:22.598 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:22.598 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:22.598 Message: lib/acl: Defining dependency "acl" 00:01:22.598 Message: lib/bbdev: Defining dependency "bbdev" 00:01:22.598 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:22.598 Run-time dependency libelf found: YES 0.190 00:01:22.598 Message: lib/bpf: Defining dependency "bpf" 00:01:22.598 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:22.598 Message: lib/compressdev: Defining dependency "compressdev" 00:01:22.598 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:22.598 Message: lib/distributor: Defining dependency "distributor" 00:01:22.598 Message: lib/dmadev: Defining dependency "dmadev" 00:01:22.598 Message: lib/efd: Defining dependency "efd" 00:01:22.598 Message: lib/eventdev: Defining dependency "eventdev" 00:01:22.598 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:22.598 Message: lib/gpudev: Defining dependency "gpudev" 00:01:22.598 Message: lib/gro: Defining dependency "gro" 00:01:22.598 Message: lib/gso: Defining dependency "gso" 00:01:22.598 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:22.598 Message: lib/jobstats: Defining dependency "jobstats" 00:01:22.598 Message: lib/latencystats: Defining dependency "latencystats" 00:01:22.598 Message: lib/lpm: Defining dependency "lpm" 00:01:22.598 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:22.598 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:22.598 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:22.598 Compiler for C 
00:01:22.598 Message: lib/member: Defining dependency "member"
00:01:22.598 Message: lib/pcapng: Defining dependency "pcapng"
00:01:22.598 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:22.598 Message: lib/power: Defining dependency "power"
00:01:22.598 Message: lib/rawdev: Defining dependency "rawdev"
00:01:22.598 Message: lib/regexdev: Defining dependency "regexdev"
00:01:22.598 Message: lib/mldev: Defining dependency "mldev"
00:01:22.598 Message: lib/rib: Defining dependency "rib"
00:01:22.598 Message: lib/reorder: Defining dependency "reorder"
00:01:22.598 Message: lib/sched: Defining dependency "sched"
00:01:22.598 Message: lib/security: Defining dependency "security"
00:01:22.598 Message: lib/stack: Defining dependency "stack"
00:01:22.598 Has header "linux/userfaultfd.h" : YES
00:01:22.598 Has header "linux/vduse.h" : YES
00:01:22.598 Message: lib/vhost: Defining dependency "vhost"
00:01:22.598 Message: lib/ipsec: Defining dependency "ipsec"
00:01:22.598 Message: lib/pdcp: Defining dependency "pdcp"
00:01:22.598 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:22.598 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:22.598 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:01:22.598 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:22.598 Message: lib/fib: Defining dependency "fib"
00:01:22.598 Message: lib/port: Defining dependency "port"
00:01:22.598 Message: lib/pdump: Defining dependency "pdump"
00:01:22.598 Message: lib/table: Defining dependency "table"
00:01:22.598 Message: lib/pipeline: Defining dependency "pipeline"
00:01:22.598 Message: lib/graph: Defining dependency "graph"
00:01:22.598 Message: lib/node: Defining dependency "node"
00:01:23.977 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:23.977 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:23.977 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:23.977 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:23.977 Compiler for C supports arguments -Wno-sign-compare: YES
00:01:23.977 Compiler for C supports arguments -Wno-unused-value: YES
00:01:23.977 Compiler for C supports arguments -Wno-format: YES
00:01:23.977 Compiler for C supports arguments -Wno-format-security: YES
00:01:23.977 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:01:23.977 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:23.977 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:01:23.977 Compiler for C supports arguments -Wno-unused-parameter: YES
00:01:23.977 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:23.978 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:23.978 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:23.978 Compiler for C supports arguments -march=skylake-avx512: YES
00:01:23.978 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:23.978 Has header "sys/epoll.h" : YES
00:01:23.978 Program doxygen found: YES (/usr/bin/doxygen)
00:01:23.978 Configuring doxy-api-html.conf using configuration
00:01:23.978 Configuring doxy-api-man.conf using configuration
00:01:23.978 Program mandb found: YES (/usr/bin/mandb)
00:01:23.978 Program sphinx-build found: NO
00:01:23.978 Configuring rte_build_config.h using configuration
00:01:23.978 Message: 
00:01:23.978 =================
00:01:23.978 Applications Enabled
=================
00:01:23.978 
00:01:23.978 apps: 
00:01:23.978 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 
00:01:23.978 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 
00:01:23.978 test-pmd, test-regex, test-sad, test-security-perf, 
00:01:23.978 
00:01:23.978 Message: 
00:01:23.978 =================
00:01:23.978 Libraries Enabled
00:01:23.978 =================
00:01:23.978 
00:01:23.978 libs: 
00:01:23.978 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:01:23.978 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 
00:01:23.978 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 
00:01:23.978 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 
00:01:23.978 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 
00:01:23.978 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 
00:01:23.978 pdcp, fib, port, pdump, table, pipeline, graph, node, 
00:01:23.978 
00:01:23.978 
00:01:23.978 Message: 
00:01:23.978 ===============
00:01:23.978 Drivers Enabled
00:01:23.978 ===============
00:01:23.978 
00:01:23.978 common: 
00:01:23.978 
00:01:23.978 bus: 
00:01:23.978 pci, vdev, 
00:01:23.978 mempool: 
00:01:23.978 ring, 
00:01:23.978 dma: 
00:01:23.978 
00:01:23.978 net: 
00:01:23.978 i40e, 
00:01:23.978 raw: 
00:01:23.978 
00:01:23.978 crypto: 
00:01:23.978 
00:01:23.978 compress: 
00:01:23.978 
00:01:23.978 regex: 
00:01:23.978 
00:01:23.978 ml: 
00:01:23.978 
00:01:23.978 vdpa: 
00:01:23.978 
00:01:23.978 event: 
00:01:23.978 
00:01:23.978 baseband: 
00:01:23.978 
00:01:23.978 gpu: 
00:01:23.978 
00:01:23.978 
00:01:23.978 Message: 
00:01:23.978 =================
00:01:23.978 Content Skipped
00:01:23.978 =================
00:01:23.978 
00:01:23.978 apps: 
00:01:23.978 
00:01:23.978 libs: 
00:01:23.978 
00:01:23.978 drivers: 
00:01:23.978 common/cpt: not in enabled drivers build config
00:01:23.978 common/dpaax: not in enabled drivers build config
00:01:23.978 common/iavf: not in enabled drivers build config
00:01:23.978 common/idpf: not in enabled drivers build config
00:01:23.978 common/mvep: not in enabled drivers build config
00:01:23.978 common/octeontx: not in enabled drivers build config
00:01:23.978 bus/auxiliary: not in enabled drivers build config
00:01:23.978 bus/cdx: not in enabled drivers build config
00:01:23.978 bus/dpaa: not in enabled drivers build config
00:01:23.978 bus/fslmc: not in enabled drivers build config
00:01:23.978 bus/ifpga: not in enabled drivers build config
00:01:23.978 bus/platform: not in enabled drivers build config
00:01:23.978 bus/vmbus: not in enabled drivers build config
00:01:23.978 common/cnxk: not in enabled drivers build config
00:01:23.978 common/mlx5: not in enabled drivers build config
00:01:23.978 common/nfp: not in enabled drivers build config
00:01:23.978 common/qat: not in enabled drivers build config
00:01:23.978 common/sfc_efx: not in enabled drivers build config
00:01:23.978 mempool/bucket: not in enabled drivers build config
00:01:23.978 mempool/cnxk: not in enabled drivers build config
00:01:23.978 mempool/dpaa: not in enabled drivers build config
00:01:23.978 mempool/dpaa2: not in enabled drivers build config
00:01:23.978 mempool/octeontx: not in enabled drivers build config
00:01:23.978 mempool/stack: not in enabled drivers build config
00:01:23.978 dma/cnxk: not in enabled drivers build config
00:01:23.978 dma/dpaa: not in enabled drivers build config
00:01:23.978 dma/dpaa2: not in enabled drivers build config
00:01:23.978 dma/hisilicon: not in enabled drivers build config
00:01:23.978 dma/idxd: not in enabled drivers build config
00:01:23.978 dma/ioat: not in enabled drivers build config
00:01:23.978 dma/skeleton: not in enabled drivers build config
00:01:23.978 net/af_packet: not in enabled drivers build config
00:01:23.978 net/af_xdp: not in enabled drivers build config
00:01:23.978 net/ark: not in enabled drivers build config
00:01:23.978 net/atlantic: not in enabled drivers build config
00:01:23.978 net/avp: not in enabled drivers build config
00:01:23.978 net/axgbe: not in enabled drivers build config
00:01:23.978 net/bnx2x: not in enabled drivers build config
00:01:23.978 net/bnxt: not in enabled drivers build config
00:01:23.978 net/bonding: not in enabled drivers build config
00:01:23.978 net/cnxk: not in enabled drivers build config
00:01:23.978 net/cpfl: not in enabled drivers build config
00:01:23.978 net/cxgbe: not in enabled drivers build config
00:01:23.978 net/dpaa: not in enabled drivers build config
00:01:23.978 net/dpaa2: not in enabled drivers build config
00:01:23.978 net/e1000: not in enabled drivers build config
00:01:23.978 net/ena: not in enabled drivers build config
00:01:23.978 net/enetc: not in enabled drivers build config
00:01:23.978 net/enetfec: not in enabled drivers build config
00:01:23.978 net/enic: not in enabled drivers build config
00:01:23.978 net/failsafe: not in enabled drivers build config
00:01:23.978 net/fm10k: not in enabled drivers build config
00:01:23.978 net/gve: not in enabled drivers build config
00:01:23.978 net/hinic: not in enabled drivers build config
00:01:23.978 net/hns3: not in enabled drivers build config
00:01:23.978 net/iavf: not in enabled drivers build config
00:01:23.978 net/ice: not in enabled drivers build config
00:01:23.978 net/idpf: not in enabled drivers build config
00:01:23.978 net/igc: not in enabled drivers build config
00:01:23.978 net/ionic: not in enabled drivers build config
00:01:23.978 net/ipn3ke: not in enabled drivers build config
00:01:23.978 net/ixgbe: not in enabled drivers build config
00:01:23.978 net/mana: not in enabled drivers build config
00:01:23.978 net/memif: not in enabled drivers build config
00:01:23.978 net/mlx4: not in enabled drivers build config
00:01:23.978 net/mlx5: not in enabled drivers build config
00:01:23.978 net/mvneta: not in enabled drivers build config
00:01:23.978 net/mvpp2: not in enabled drivers build config
00:01:23.978 net/netvsc: not in enabled drivers build config
00:01:23.978 net/nfb: not in enabled drivers build config
00:01:23.978 net/nfp: not in enabled drivers build config
00:01:23.978 net/ngbe: not in enabled drivers build config
00:01:23.978 net/null: not in enabled drivers build config
00:01:23.978 net/octeontx: not in enabled drivers build config
00:01:23.978 net/octeon_ep: not in enabled drivers build config
00:01:23.978 net/pcap: not in enabled drivers build config
00:01:23.978 net/pfe: not in enabled drivers build config
00:01:23.978 net/qede: not in enabled drivers build config
00:01:23.978 net/ring: not in enabled drivers build config
00:01:23.978 net/sfc: not in enabled drivers build config
00:01:23.978 net/softnic: not in enabled drivers build config
00:01:23.978 net/tap: not in enabled drivers build config
00:01:23.978 net/thunderx: not in enabled drivers build config
00:01:23.978 net/txgbe: not in enabled drivers build config
00:01:23.978 net/vdev_netvsc: not in enabled drivers build config
00:01:23.978 net/vhost: not in enabled drivers build config
00:01:23.978 net/virtio: not in enabled drivers build config
00:01:23.978 net/vmxnet3: not in enabled drivers build config
00:01:23.978 raw/cnxk_bphy: not in enabled drivers build config
00:01:23.978 raw/cnxk_gpio: not in enabled drivers build config
00:01:23.978 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:23.978 raw/ifpga: not in enabled drivers build config
00:01:23.978 raw/ntb: not in enabled drivers build config
00:01:23.978 raw/skeleton: not in enabled drivers build config
00:01:23.978 crypto/armv8: not in enabled drivers build config
00:01:23.978 crypto/bcmfs: not in enabled drivers build config
00:01:23.978 crypto/caam_jr: not in enabled drivers build config
00:01:23.978 crypto/ccp: not in enabled drivers build config
00:01:23.978 crypto/cnxk: not in enabled drivers build config
00:01:23.978 crypto/dpaa_sec: not in enabled drivers build config
00:01:23.978 crypto/dpaa2_sec: not in enabled drivers build config
00:01:23.978 crypto/ipsec_mb: not in enabled drivers build config
00:01:23.978 crypto/mlx5: not in enabled drivers build config
00:01:23.978 crypto/mvsam: not in enabled drivers build config
00:01:23.978 crypto/nitrox: not in enabled drivers build config
00:01:23.978 crypto/null: not in enabled drivers build config
00:01:23.978 crypto/octeontx: not in enabled drivers build config
00:01:23.978 crypto/openssl: not in enabled drivers build config
00:01:23.978 crypto/scheduler: not in enabled drivers build config
00:01:23.978 crypto/uadk: not in enabled drivers build config
00:01:23.978 crypto/virtio: not in enabled drivers build config
00:01:23.978 compress/isal: not in enabled drivers build config
00:01:23.978 compress/mlx5: not in enabled drivers build config
00:01:23.978 compress/octeontx: not in enabled drivers build config
00:01:23.978 compress/zlib: not in enabled drivers build config
00:01:23.978 regex/mlx5: not in enabled drivers build config
00:01:23.978 regex/cn9k: not in enabled drivers build config
00:01:23.978 ml/cnxk: not in enabled drivers build config
00:01:23.978 vdpa/ifc: not in enabled drivers build config
00:01:23.978 vdpa/mlx5: not in enabled drivers build config
00:01:23.978 vdpa/nfp: not in enabled drivers build config
00:01:23.978 vdpa/sfc: not in enabled drivers build config
00:01:23.978 event/cnxk: not in enabled drivers build config
00:01:23.978 event/dlb2: not in enabled drivers build config
00:01:23.978 event/dpaa: not in enabled drivers build config
00:01:23.978 event/dpaa2: not in enabled drivers build config
00:01:23.978 event/dsw: not in enabled drivers build config
00:01:23.978 event/opdl: not in enabled drivers build config
00:01:23.978 event/skeleton: not in enabled drivers build config
00:01:23.978 event/sw: not in enabled drivers build config
00:01:23.978 event/octeontx: not in enabled drivers build config
00:01:23.978 baseband/acc: not in enabled drivers build config
00:01:23.978 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:23.978 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:23.978 baseband/la12xx: not in enabled drivers build config
00:01:23.978 baseband/null: not in enabled drivers build config
00:01:23.979 baseband/turbo_sw: not in enabled drivers build config
00:01:23.979 gpu/cuda: not in enabled drivers build config
00:01:23.979 
00:01:23.979 
00:01:23.979 Build targets in project: 220
00:01:23.979 
00:01:23.979 DPDK 23.11.0
00:01:23.979 
00:01:23.979 User defined options
00:01:23.979 libdir : lib
00:01:23.979 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:23.979 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:23.979 c_link_args : 
00:01:23.979 enable_docs : false
00:01:23.979 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:23.979 enable_kmods : false
00:01:23.979 machine : native
00:01:23.979 tests : false
00:01:23.979 
00:01:23.979 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:23.979 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
16:00:06 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:24.240 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:24.240 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:24.240 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:24.240 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:24.240 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:24.240 [5/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:24.240 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:24.240 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:24.240 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:24.240 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:24.240 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:24.240 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:24.240 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:24.240 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:24.240 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:24.240 [15/710] Linking static target lib/librte_kvargs.a
00:01:24.240 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:24.240 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:24.240 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:24.240 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:24.240 [20/710] Linking static target lib/librte_log.a
00:01:24.240 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:24.497 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:25.024 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:25.285 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:25.285 [25/710] Linking target lib/librte_log.so.24.0
00:01:25.285 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:25.285 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:25.285 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:25.285 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:25.285 [30/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:25.285 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:25.285 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:25.285 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:25.285 [34/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:25.285 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:25.285 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:25.285 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:25.285 [38/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:25.285 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:25.285 [40/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:25.285 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:25.285 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:25.285 [43/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.286 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:25.286 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:25.286 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:25.286 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:25.286 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.286 [49/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:25.544 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:25.544 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:25.544 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.544 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:25.544 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:25.544 [55/710] Linking target lib/librte_kvargs.so.24.0 00:01:25.544 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:25.544 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:25.544 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:25.544 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:25.544 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.544 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.544 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:25.544 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.544 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.813 [65/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:25.813 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.813 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:26.073 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:26.073 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:26.073 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:26.073 [71/710] Linking static target lib/librte_pci.a 00:01:26.073 [72/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:26.073 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:26.073 [74/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:26.073 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:26.336 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:26.336 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:26.336 [78/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.336 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:26.336 [80/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:26.336 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:26.336 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:26.336 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:26.336 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:26.336 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:26.336 [86/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:26.336 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:26.604 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:26.604 [89/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:26.604 [90/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:26.604 [91/710] Linking static target lib/librte_ring.a 00:01:26.604 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:26.604 [93/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:26.604 [94/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:26.604 [95/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:26.604 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:26.604 [97/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:26.604 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:26.604 [99/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:26.604 [100/710] Linking static target lib/librte_meter.a 00:01:26.604 [101/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:26.604 [102/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:26.865 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:26.865 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:26.865 [105/710] Linking static target lib/librte_telemetry.a 00:01:26.865 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:26.865 [107/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:26.865 [108/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:26.865 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:26.865 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:26.865 [111/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:26.865 [112/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:26.865 
[113/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:26.865 [114/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.865 [115/710] Linking static target lib/librte_eal.a 00:01:26.865 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:26.865 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.125 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:27.125 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:27.125 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:27.125 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:27.125 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:27.125 [123/710] Linking static target lib/librte_net.a 00:01:27.125 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:27.125 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:27.385 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:27.385 [127/710] Linking static target lib/librte_cmdline.a 00:01:27.385 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:27.385 [129/710] Linking static target lib/librte_mempool.a 00:01:27.385 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:27.649 [131/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:27.649 [132/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.649 [133/710] Linking static target lib/librte_cfgfile.a 00:01:27.649 [134/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.649 [135/710] Linking target lib/librte_telemetry.so.24.0 00:01:27.649 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:27.649 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:27.649 [138/710] Linking static target lib/librte_metrics.a 00:01:27.649 [139/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:27.649 [140/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:27.649 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:27.649 [142/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:27.910 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:27.910 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:27.910 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:27.910 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:27.910 [147/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:27.910 [148/710] Linking static target lib/librte_bitratestats.a 00:01:28.182 [149/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.182 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:28.182 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:28.182 [152/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:28.182 [153/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:28.182 [154/710] Linking static target 
lib/librte_rcu.a 00:01:28.182 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:28.182 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.182 [157/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:28.182 [158/710] Linking static target lib/librte_timer.a 00:01:28.182 [159/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:28.447 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:28.447 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:28.447 [162/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:28.447 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.447 [164/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.448 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:28.448 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:28.713 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:28.713 [168/710] Linking static target lib/librte_bbdev.a 00:01:28.713 [169/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.713 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:28.713 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.713 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:28.713 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:28.713 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:28.971 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:28.971 [176/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:28.971 [177/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.971 [178/710] Linking static target lib/librte_compressdev.a 00:01:28.971 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:28.971 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:29.232 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:29.232 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:29.232 [183/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:29.232 [184/710] Linking static target lib/librte_distributor.a 00:01:29.232 [185/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:29.492 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:29.492 [187/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:29.750 [188/710] Linking static target lib/librte_dmadev.a 00:01:29.750 [189/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:29.750 [190/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.750 [191/710] Linking static target lib/librte_bpf.a 00:01:29.750 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:29.751 [193/710] Compiling C object 
lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:29.751 [194/710] Linking static target lib/librte_dispatcher.a 00:01:29.751 [195/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.751 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:29.751 [197/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.751 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:30.012 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:30.012 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:30.012 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:30.012 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:30.012 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:30.012 [204/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:30.012 [205/710] Linking static target lib/librte_gpudev.a 00:01:30.012 [206/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:30.012 [207/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:30.012 [208/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:30.012 [209/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:30.012 [210/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:30.012 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:30.272 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:30.272 [213/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.272 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:30.272 [215/710] Linking static target lib/librte_gro.a 00:01:30.272 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:30.272 [217/710] Linking static target lib/librte_jobstats.a 00:01:30.272 [218/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.272 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:30.272 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:30.533 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:30.533 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.533 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.799 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:30.799 [225/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.799 [226/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:30.799 [227/710] Linking static target lib/librte_latencystats.a 00:01:30.799 [228/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:30.799 [229/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:30.799 [230/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:30.799 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:30.799 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:31.059 [233/710] 
Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:31.059 [234/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:31.059 [235/710] Linking static target lib/librte_ip_frag.a 00:01:31.059 [236/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:31.059 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:31.059 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.318 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.318 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:31.318 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:31.318 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:31.318 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.318 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:31.580 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.580 [246/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:31.580 [247/710] Linking static target lib/librte_gso.a 00:01:31.580 [248/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:31.580 [249/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.843 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:31.843 [251/710] Linking static target lib/librte_regexdev.a 00:01:31.843 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:31.843 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:31.843 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:31.843 [255/710] Linking static target lib/librte_rawdev.a 00:01:31.843 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:31.843 [257/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:31.843 [258/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:31.843 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.101 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:32.101 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:32.101 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:32.101 [263/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:32.101 [264/710] Linking static target lib/librte_mldev.a 00:01:32.101 [265/710] Linking static target lib/librte_efd.a 00:01:32.101 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:32.101 [267/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:32.102 [268/710] Linking static target lib/librte_pcapng.a 00:01:32.102 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:32.102 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:32.102 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:01:32.102 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:32.102 [273/710] Linking static target lib/librte_stack.a 00:01:32.362 [274/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:32.362 
[275/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:32.362 [276/710] Linking static target lib/librte_lpm.a 00:01:32.362 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.362 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.362 [279/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:32.362 [280/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:32.362 [281/710] Linking static target lib/librte_hash.a 00:01:32.362 [282/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.627 [283/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.627 [284/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.627 [285/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.627 [286/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:32.627 [287/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.627 [288/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.627 [289/710] Linking static target lib/librte_reorder.a 00:01:32.886 [290/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:32.886 [291/710] Linking static target lib/acl/libavx512_tmp.a 00:01:32.886 [292/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.886 [293/710] Linking static target lib/librte_acl.a 00:01:32.886 [294/710] Linking static target lib/librte_power.a 00:01:32.886 [295/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.886 [296/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.886 [297/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.886 [298/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.886 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.886 [300/710] Linking static target lib/librte_security.a 00:01:33.150 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:33.150 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.150 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:33.150 [304/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.415 [305/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:33.415 [306/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.415 [307/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:33.415 [308/710] Linking static target lib/librte_rib.a 00:01:33.415 [309/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:33.415 [310/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.415 [311/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:33.415 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:33.415 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:33.415 [314/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:33.415 [315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 
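Note: the "User defined options" summary earlier in this log, together with the WARNING about invoking meson without an explicit subcommand, implies a configure step roughly like the sketch below. This is a hypothetical reconstruction (the actual invocation is not captured in the log) written in the `meson setup` form the warning asks for, using only the option values the summary reports:

    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false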
00:01:33.415 [316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:33.415 [317/710] Linking static target lib/librte_mbuf.a 00:01:33.674 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.674 [319/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:33.674 [320/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:33.674 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:33.674 [322/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:33.674 [323/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:33.674 [324/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:33.674 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:33.935 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.935 [327/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.200 [328/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.200 [329/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:34.200 [330/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:34.200 [331/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:34.460 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:34.460 [333/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.460 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:34.460 [335/710] Linking static target lib/librte_eventdev.a 00:01:34.460 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:34.460 [337/710] Linking static target lib/librte_member.a 00:01:34.460 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:34.724 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:34.724 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:34.724 [341/710] Linking static target lib/librte_cryptodev.a 00:01:34.724 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:34.724 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:35.000 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:35.000 [345/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:35.000 [346/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:35.000 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:35.000 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:35.000 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:35.000 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:35.000 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:35.000 [352/710] Linking static target lib/librte_sched.a 00:01:35.000 [353/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:35.000 [354/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:35.000 [355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:35.000 [356/710] Generating lib/member.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:35.000 [357/710] Linking static target lib/librte_ethdev.a 00:01:35.000 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:35.000 [359/710] Linking static target lib/librte_fib.a 00:01:35.264 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:35.264 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:35.264 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:35.264 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:35.264 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:35.577 [365/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:35.577 [366/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:35.577 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:35.577 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:35.577 [369/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.841 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.841 [371/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:35.841 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:35.841 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:35.841 [374/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:35.841 [375/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:36.102 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:36.102 [377/710] Linking static target lib/librte_pdump.a 00:01:36.102 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:36.102 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:36.102 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:36.102 [381/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:36.102 [382/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:36.102 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:36.363 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:36.363 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:36.363 [386/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:36.363 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.363 [388/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:36.363 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:36.363 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.621 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:36.621 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:36.621 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:36.621 [394/710] Linking static target lib/librte_ipsec.a 00:01:36.621 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:36.621 [396/710] Linking static target lib/librte_table.a 00:01:36.621 [397/710] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.939 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:36.939 [399/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:36.939 [400/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:37.260 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:37.260 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.260 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:37.526 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:37.526 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:37.526 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:37.526 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:37.526 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:37.526 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.526 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:37.526 [411/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:37.788 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:37.788 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:37.788 [414/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:37.788 [415/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:38.052 [416/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.052 [417/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.052 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.052 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.052 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.052 [421/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.052 [422/710] Linking static target drivers/librte_bus_vdev.a 00:01:38.052 [423/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:38.320 [424/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.320 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:38.320 [426/710] Linking static target lib/librte_port.a 00:01:38.320 [427/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:38.320 [428/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.320 [429/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.320 [430/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.320 [431/710] Linking static target drivers/librte_bus_pci.a 00:01:38.584 [432/710] Linking target lib/librte_eal.so.24.0 00:01:38.584 [433/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:38.584 [434/710] Linking static target lib/librte_graph.a 00:01:38.584 [435/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.584 
[436/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.584 [437/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:38.584 [438/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:38.584 [439/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:38.847 [440/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:38.847 [441/710] Linking target lib/librte_ring.so.24.0 00:01:38.847 [442/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:38.847 [443/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.847 [444/710] Linking target lib/librte_pci.so.24.0 00:01:38.847 [445/710] Linking target lib/librte_meter.so.24.0 00:01:39.112 [446/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:39.112 [447/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:39.112 [448/710] Linking target lib/librte_timer.so.24.0 00:01:39.112 [449/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:39.112 [450/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:39.112 [451/710] Linking target lib/librte_rcu.so.24.0 00:01:39.112 [452/710] Linking target lib/librte_mempool.so.24.0 00:01:39.112 [453/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:39.112 [454/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:39.112 [455/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:39.112 [456/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.112 [457/710] Linking target lib/librte_acl.so.24.0 00:01:39.375 [458/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.375 [459/710] Linking target lib/librte_dmadev.so.24.0 00:01:39.375 [460/710] Linking target lib/librte_cfgfile.so.24.0 00:01:39.375 [461/710] Linking target lib/librte_jobstats.so.24.0 00:01:39.375 [462/710] Linking target lib/librte_rawdev.so.24.0 00:01:39.375 [463/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.375 [464/710] Linking target lib/librte_stack.so.24.0 00:01:39.375 [465/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:39.375 [466/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:39.375 [467/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:39.375 [468/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:39.375 [469/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:39.375 [470/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:39.375 [471/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:39.375 [472/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:39.375 [473/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:39.640 [474/710] Linking target lib/librte_rib.so.24.0 00:01:39.640 [475/710] Linking target lib/librte_mbuf.so.24.0 00:01:39.640 [476/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:39.640 [477/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:39.640 [478/710] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:39.640 [479/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:39.640 [480/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.640 [481/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:39.640 [482/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:39.640 [483/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:39.640 [484/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:39.640 [485/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:39.640 [486/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:39.640 [487/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.640 [488/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:39.640 [489/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.640 [490/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.640 [491/710] Linking static target drivers/librte_mempool_ring.a 00:01:39.899 [492/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:39.899 [493/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:39.899 [494/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:39.899 [495/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:39.899 [496/710] Linking target lib/librte_fib.so.24.0 00:01:39.899 [497/710] Linking target lib/librte_net.so.24.0 00:01:39.899 [498/710] Linking target lib/librte_bbdev.so.24.0 00:01:39.899 [499/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:39.899 [500/710] Linking target lib/librte_compressdev.so.24.0 00:01:39.899 [501/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:39.899 [502/710] Linking target lib/librte_distributor.so.24.0 00:01:39.899 [503/710] Linking target lib/librte_cryptodev.so.24.0 00:01:40.164 [504/710] Linking target lib/librte_gpudev.so.24.0 00:01:40.164 [505/710] Linking target lib/librte_regexdev.so.24.0 00:01:40.164 [506/710] Linking target lib/librte_mldev.so.24.0 00:01:40.164 [507/710] Linking target lib/librte_reorder.so.24.0 00:01:40.164 [508/710] Linking target lib/librte_sched.so.24.0 00:01:40.164 [509/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:40.164 [510/710] Linking target lib/librte_cmdline.so.24.0 00:01:40.164 [511/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:40.164 [512/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:40.423 [513/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:40.423 [514/710] Linking target lib/librte_hash.so.24.0 00:01:40.423 [515/710] Linking target lib/librte_security.so.24.0 00:01:40.423 [516/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:40.423 [517/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:40.423 [518/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:40.686 [519/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:40.686 [520/710] Generating symbol file 
lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:40.686 [521/710] Linking target lib/librte_efd.so.24.0 00:01:40.686 [522/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:40.686 [523/710] Linking target lib/librte_lpm.so.24.0 00:01:40.686 [524/710] Linking target lib/librte_member.so.24.0 00:01:40.951 [525/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:40.951 [526/710] Linking target lib/librte_ipsec.so.24.0 00:01:40.951 [527/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:40.951 [528/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:40.951 [529/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:40.951 [530/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:41.217 [531/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:41.217 [532/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:41.217 [533/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:41.217 [534/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:41.217 [535/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:41.217 [536/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:41.217 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:41.217 [538/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:41.217 [539/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:41.217 [540/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:41.481 [541/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:41.481 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:41.744 [543/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:41.744 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:41.744 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:41.744 [546/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:42.005 [547/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:42.005 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:42.005 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:42.005 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:42.005 [551/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:42.005 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:42.266 [553/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:42.266 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:42.266 [555/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:42.266 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:42.532 [557/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:42.532 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:42.532 [559/710] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:42.794 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:43.057 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:43.057 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:43.057 [563/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:43.057 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:43.319 [565/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:43.319 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.319 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:43.319 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:43.319 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:43.580 [570/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:43.580 [571/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:43.580 [572/710] Linking target lib/librte_ethdev.so.24.0 00:01:43.580 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:43.844 [574/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:43.844 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:43.844 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:43.844 [577/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:43.844 [578/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:43.844 [579/710] Linking target lib/librte_metrics.so.24.0 00:01:43.844 [580/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:44.105 [581/710] Linking target lib/librte_bpf.so.24.0 00:01:44.105 [582/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:44.105 [583/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:44.105 [584/710] Linking target lib/librte_eventdev.so.24.0 00:01:44.105 [585/710] Linking target lib/librte_gro.so.24.0 00:01:44.105 [586/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:44.105 [587/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:44.105 [588/710] Linking target lib/librte_gso.so.24.0 00:01:44.105 [589/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:44.105 [590/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:44.105 [591/710] Linking target lib/librte_ip_frag.so.24.0 00:01:44.105 [592/710] Linking target lib/librte_pcapng.so.24.0 00:01:44.105 [593/710] Linking target lib/librte_bitratestats.so.24.0 00:01:44.105 [594/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:44.105 [595/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:44.105 [596/710] Linking target lib/librte_latencystats.so.24.0 00:01:44.369 [597/710] Linking target lib/librte_power.so.24.0 00:01:44.369 [598/710] Linking static target lib/librte_pdcp.a 00:01:44.369 [599/710] Generating symbol file 
lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:44.369 [600/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:44.369 [601/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:44.369 [602/710] Linking target lib/librte_dispatcher.so.24.0 00:01:44.369 [603/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:44.369 [604/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:44.369 [605/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:44.369 [606/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:44.629 [607/710] Linking target lib/librte_pdump.so.24.0 00:01:44.629 [608/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:44.629 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:44.629 [610/710] Linking target lib/librte_graph.so.24.0 00:01:44.629 [611/710] Linking target lib/librte_port.so.24.0 00:01:44.629 [612/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:44.895 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:44.895 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:44.895 [615/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:44.895 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:44.895 [617/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:44.895 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:44.895 [619/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.895 [620/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:44.895 [621/710] Linking target lib/librte_pdcp.so.24.0 00:01:44.895 [622/710] Linking target lib/librte_table.so.24.0 00:01:44.895 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:45.156 [624/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:45.156 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:45.156 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:45.156 [627/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:45.417 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:45.417 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:45.676 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:45.676 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:45.676 [632/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:45.934 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:45.934 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:45.934 [635/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:45.934 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:45.934 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:45.934 [638/710] Compiling C 
object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:46.193 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:46.193 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:46.193 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:46.193 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:46.193 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:46.193 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:46.451 [645/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:46.451 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:46.451 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:46.451 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:46.710 [649/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:46.710 [650/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:46.967 [651/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:46.967 [652/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:46.967 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:46.967 [654/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:46.967 [655/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:46.967 [656/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:47.225 [657/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:47.225 [658/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:47.225 [659/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:47.483 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:47.483 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:47.483 [662/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:47.483 [663/710] Linking static target drivers/librte_net_i40e.a 00:01:47.740 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:48.022 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:48.022 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.022 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:48.022 [668/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:48.280 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:48.280 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:48.844 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:48.844 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:48.844 [673/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:48.844 [674/710] Linking static target lib/librte_node.a 00:01:49.101 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.359 [676/710] Linking target lib/librte_node.so.24.0 00:01:50.731 [677/710] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:50.731 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:50.731 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:52.631 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:52.631 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:59.184 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:31.317 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.317 [684/710] Linking static target lib/librte_vhost.a 00:02:31.317 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.317 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:41.283 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:41.283 [688/710] Linking static target lib/librte_pipeline.a 00:02:41.283 [689/710] Linking target app/dpdk-test-acl 00:02:41.283 [690/710] Linking target app/dpdk-dumpcap 00:02:41.283 [691/710] Linking target app/dpdk-test-cmdline 00:02:41.283 [692/710] Linking target app/dpdk-test-security-perf 00:02:41.283 [693/710] Linking target app/dpdk-graph 00:02:41.283 [694/710] Linking target app/dpdk-test-dma-perf 00:02:41.283 [695/710] Linking target app/dpdk-test-regex 00:02:41.283 [696/710] Linking target app/dpdk-proc-info 00:02:41.283 [697/710] Linking target app/dpdk-test-bbdev 00:02:41.283 [698/710] Linking target app/dpdk-test-eventdev 00:02:41.283 [699/710] Linking target app/dpdk-test-crypto-perf 00:02:41.283 [700/710] Linking target app/dpdk-test-gpudev 00:02:41.283 [701/710] Linking target app/dpdk-test-compress-perf 00:02:41.283 [702/710] Linking target app/dpdk-test-fib 00:02:41.283 [703/710] Linking target app/dpdk-test-flow-perf 00:02:41.283 [704/710] Linking target app/dpdk-test-sad 00:02:41.283 [705/710] Linking target app/dpdk-pdump 00:02:41.283 [706/710] Linking target app/dpdk-test-pipeline 00:02:41.283 [707/710] Linking target app/dpdk-test-mldev 00:02:41.283 [708/710] Linking target app/dpdk-testpmd 00:02:43.182 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.439 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:43.439 16:01:26 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:43.439 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:43.439 [0/1] Installing files. 
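Note: this second ninja invocation runs the meson-generated install target, which copies headers, libraries, and the example sources listed below into the configured prefix (.../dpdk/build). As a general meson/ninja convention (not something this particular job does), the same tree can be staged into a scratch directory instead by setting DESTDIR:

    DESTDIR=/tmp/dpdk-staging ninja -C build-tmp install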
00:02:43.700 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.701 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.702 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.702 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:43.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:43.705 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.705 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:43.706 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.273 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.273 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.273 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.273 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:44.273 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.273 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.274 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.275 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:44.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:44.538 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:44.538 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:44.538 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:44.538 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:44.538 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:44.538 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:44.538 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:44.538 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:44.538 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:44.538 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:44.538 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:44.538 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:44.538 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:44.538 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:44.538 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:44.538 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:44.538 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:44.538 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:44.538 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:44.538 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:44.538 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:44.538 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:44.538 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:44.538 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:44.538 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:44.538 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:44.538 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:44.538 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:44.538 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:44.538 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:44.538 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:44.538 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:44.538 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:44.538 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:44.538 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:44.538 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:44.538 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:44.538 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:44.538 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:44.538 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:44.538 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:44.538 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:44.538 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:44.538 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:44.538 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:44.538 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:44.538 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:44.538 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:44.538 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:44.538 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:44.538 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:44.538 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:44.538 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:44.539 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:44.539 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:44.539 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:44.539 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:44.539 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:44.539 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:44.539 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:44.539 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:44.539 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:44.539 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:44.539 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:44.539 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:44.539 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:44.539 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:44.539 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:44.539 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:44.539 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:44.539 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:44.539 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:44.539 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:44.539 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:44.539 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:44.539 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:44.539 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:44.539 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:44.539 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:44.539 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:44.539 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:44.539 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:44.539 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:44.539 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:44.539 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:44.539 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:44.539 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:44.539 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:44.539 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:44.539 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:44.539 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:44.539 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:44.539 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:44.539 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:44.539 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:44.539 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:44.539 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:44.539 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:44.539 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:44.539 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:44.539 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:44.539 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:44.539 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:44.539 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:44.539 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:44.539 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:44.539 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:44.539 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:44.539 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:44.539 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:44.539 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:44.539 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:44.539 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:44.539 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:44.539 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:44.539 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:44.539 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:44.539 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:44.539 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:44.539 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:44.539 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:44.539 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:44.539 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:44.539 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:44.539 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:44.539 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:44.539 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:44.539 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:44.539 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:44.539 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:44.539 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:44.539 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:44.539 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:02:44.539 16:01:27 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s
00:02:44.539 16:01:27 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:44.539 16:01:27 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat
00:02:44.539 16:01:27 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:44.539
00:02:44.539 real 1m26.170s
00:02:44.539 user 18m13.058s
00:02:44.539 sys 2m8.272s
00:02:44.539 16:01:27 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:02:44.539 16:01:27 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:44.539 ************************************
00:02:44.539 END TEST build_native_dpdk
00:02:44.539 ************************************
00:02:44.539 16:01:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:44.539 16:01:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:44.539 16:01:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:44.539 16:01:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:44.539 16:01:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:44.539 16:01:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:44.539 16:01:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:44.539 16:01:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:44.539 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:44.797 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:44.797 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.797 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:45.056 Using 'verbs' RDMA provider
00:02:55.601 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:05.573 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:05.574 Creating mk/config.mk...done.
00:03:05.574 Creating mk/cc.flags.mk...done.
00:03:05.574 Type 'make' to build.
00:03:05.574 16:01:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:05.574 16:01:47 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:03:05.574 16:01:47 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:03:05.574 16:01:47 -- common/autotest_common.sh@10 -- $ set +x
00:03:05.574 ************************************
00:03:05.574 START TEST make
00:03:05.574 ************************************
00:03:05.574 16:01:47 make -- common/autotest_common.sh@1121 -- $ make -j48
00:03:05.574 make[1]: Nothing to be done for 'all'.
00:03:06.518 The Meson build system
00:03:06.518 Version: 1.3.1
00:03:06.518 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:06.518 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:06.518 Build type: native build
00:03:06.518 Project name: libvfio-user
00:03:06.518 Project version: 0.0.1
00:03:06.518 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:06.518 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:06.518 Host machine cpu family: x86_64
00:03:06.518 Host machine cpu: x86_64
00:03:06.518 Run-time dependency threads found: YES
00:03:06.518 Library dl found: YES
00:03:06.518 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:06.518 Run-time dependency json-c found: YES 0.17
00:03:06.518 Run-time dependency cmocka found: YES 1.1.7
00:03:06.518 Program pytest-3 found: NO
00:03:06.518 Program flake8 found: NO
00:03:06.518 Program misspell-fixer found: NO
00:03:06.518 Program restructuredtext-lint found: NO
00:03:06.518 Program valgrind found: YES (/usr/bin/valgrind)
00:03:06.518 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:06.518 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:06.518 Compiler for C supports arguments -Wwrite-strings: YES
00:03:06.518 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:06.518 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:06.518 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:06.518 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:06.518 Build targets in project: 8
00:03:06.518 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:06.518 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:06.518
00:03:06.518 libvfio-user 0.0.1
00:03:06.518
00:03:06.518 User defined options
00:03:06.518 buildtype : debug
00:03:06.518 default_library: shared
00:03:06.518 libdir : /usr/local/lib
00:03:06.518
00:03:06.518 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:07.093 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:07.358 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:07.358 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:07.358 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:07.358 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:07.358 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:07.358 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:07.358 [7/37] Compiling C object samples/null.p/null.c.o
00:03:07.358 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:07.358 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:07.358 [10/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:07.358 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:07.622 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:07.622 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:07.622 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:07.622 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:07.622 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:07.622 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:07.622 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:07.622 [19/37] Compiling C object samples/client.p/client.c.o
00:03:07.622 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:07.622 [21/37] Compiling C object samples/server.p/server.c.o
00:03:07.622 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:07.622 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:07.622 [24/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:07.622 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:07.622 [26/37] Linking target samples/client
00:03:07.622 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:07.622 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:07.622 [29/37] Linking target test/unit_tests
00:03:07.622 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:07.885 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:08.151 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:08.151 [33/37] Linking target samples/server
00:03:08.151 [34/37] Linking target samples/null
00:03:08.151 [35/37] Linking target samples/shadow_ioeventfd_server
00:03:08.151 [36/37] Linking target samples/lspci
00:03:08.151 [37/37] Linking target samples/gpio-pci-idio-16
00:03:08.151 INFO: autodetecting backend as ninja
00:03:08.151 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:08.151 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:09.098 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:09.098 ninja: no work to do.
00:03:21.344 CC lib/log/log.o
00:03:21.344 CC lib/log/log_flags.o
00:03:21.344 CC lib/log/log_deprecated.o
00:03:21.344 CC lib/ut/ut.o
00:03:21.344 CC lib/ut_mock/mock.o
00:03:21.344 LIB libspdk_ut.a
00:03:21.344 LIB libspdk_log.a
00:03:21.344 LIB libspdk_ut_mock.a
00:03:21.344 SO libspdk_ut.so.2.0
00:03:21.344 SO libspdk_ut_mock.so.6.0
00:03:21.344 SO libspdk_log.so.7.0
00:03:21.344 SYMLINK libspdk_ut.so
00:03:21.344 SYMLINK libspdk_ut_mock.so
00:03:21.344 SYMLINK libspdk_log.so
00:03:21.344 CC lib/ioat/ioat.o
00:03:21.344 CC lib/dma/dma.o
00:03:21.344 CXX lib/trace_parser/trace.o
00:03:21.344 CC lib/util/base64.o
00:03:21.344 CC lib/util/bit_array.o
00:03:21.344 CC lib/util/cpuset.o
00:03:21.344 CC lib/util/crc16.o
00:03:21.344 CC lib/util/crc32.o
00:03:21.344 CC lib/util/crc32c.o
00:03:21.344 CC lib/util/crc32_ieee.o
00:03:21.344 CC lib/util/crc64.o
00:03:21.344 CC lib/util/dif.o
00:03:21.344 CC lib/util/fd.o
00:03:21.344 CC lib/util/file.o
00:03:21.344 CC lib/util/hexlify.o
00:03:21.344 CC lib/util/iov.o
00:03:21.344 CC lib/util/math.o
00:03:21.344 CC lib/util/pipe.o
00:03:21.344 CC lib/util/strerror_tls.o
00:03:21.344 CC lib/util/string.o
00:03:21.344 CC lib/util/uuid.o
00:03:21.344 CC lib/util/fd_group.o
00:03:21.344 CC lib/util/xor.o
00:03:21.344 CC lib/util/zipf.o
00:03:21.344 CC lib/vfio_user/host/vfio_user_pci.o
00:03:21.344 CC lib/vfio_user/host/vfio_user.o
00:03:21.344 LIB libspdk_dma.a
00:03:21.344 SO libspdk_dma.so.4.0
00:03:21.344 SYMLINK libspdk_dma.so
00:03:21.344 LIB libspdk_ioat.a
00:03:21.344 SO libspdk_ioat.so.7.0
00:03:21.344 SYMLINK libspdk_ioat.so
00:03:21.344 LIB libspdk_vfio_user.a
00:03:21.344 SO libspdk_vfio_user.so.5.0
00:03:21.344 SYMLINK libspdk_vfio_user.so
00:03:21.344 LIB libspdk_util.a
00:03:21.344 SO libspdk_util.so.9.0
00:03:21.344 SYMLINK libspdk_util.so
00:03:21.344 CC lib/idxd/idxd.o
00:03:21.344 CC lib/json/json_parse.o
00:03:21.344 CC lib/conf/conf.o
00:03:21.344 CC lib/env_dpdk/env.o
00:03:21.344 CC lib/idxd/idxd_user.o
00:03:21.344 CC lib/json/json_util.o
00:03:21.344 CC lib/idxd/idxd_kernel.o
00:03:21.344 CC lib/env_dpdk/memory.o
00:03:21.344 CC lib/json/json_write.o
00:03:21.344 CC lib/env_dpdk/pci.o
00:03:21.344 CC lib/env_dpdk/init.o
00:03:21.344 CC lib/env_dpdk/threads.o
00:03:21.344 CC lib/vmd/vmd.o
00:03:21.344 CC lib/env_dpdk/pci_ioat.o
00:03:21.344 CC lib/vmd/led.o
00:03:21.344 CC lib/env_dpdk/pci_virtio.o
00:03:21.344 CC lib/rdma/common.o
00:03:21.344 CC lib/env_dpdk/pci_vmd.o
00:03:21.344 CC lib/rdma/rdma_verbs.o
00:03:21.344 CC lib/env_dpdk/pci_idxd.o
00:03:21.344 CC lib/env_dpdk/pci_event.o
00:03:21.344 CC lib/env_dpdk/sigbus_handler.o
00:03:21.344 CC lib/env_dpdk/pci_dpdk.o
00:03:21.344 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:21.344 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:21.344 LIB libspdk_trace_parser.a
00:03:21.344 SO libspdk_trace_parser.so.5.0
00:03:21.344 SYMLINK libspdk_trace_parser.so
00:03:21.344 LIB libspdk_rdma.a
00:03:21.344 LIB libspdk_json.a
00:03:21.344 LIB libspdk_conf.a
00:03:21.602 SO libspdk_conf.so.6.0
00:03:21.602 SO libspdk_rdma.so.6.0
00:03:21.602 SO libspdk_json.so.6.0
00:03:21.602 SYMLINK libspdk_conf.so
00:03:21.602 SYMLINK libspdk_rdma.so
00:03:21.602 SYMLINK libspdk_json.so
00:03:21.602 CC lib/jsonrpc/jsonrpc_server.o
00:03:21.602 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:21.602 CC lib/jsonrpc/jsonrpc_client.o
00:03:21.602 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:21.859 LIB libspdk_idxd.a
00:03:21.859 LIB libspdk_vmd.a
00:03:21.859 SO libspdk_idxd.so.12.0
00:03:21.859 SO libspdk_vmd.so.6.0
00:03:21.859 SYMLINK libspdk_idxd.so
00:03:21.859 SYMLINK libspdk_vmd.so
00:03:22.117 LIB libspdk_jsonrpc.a
00:03:22.117 SO libspdk_jsonrpc.so.6.0
00:03:22.117 SYMLINK libspdk_jsonrpc.so
00:03:22.375 CC lib/rpc/rpc.o
00:03:22.375 LIB libspdk_rpc.a
00:03:22.632 SO libspdk_rpc.so.6.0
00:03:22.632 SYMLINK libspdk_rpc.so
00:03:22.632 CC lib/trace/trace.o
00:03:22.632 CC lib/notify/notify.o
00:03:22.632 CC lib/keyring/keyring.o
00:03:22.632 CC lib/trace/trace_flags.o
00:03:22.632 CC lib/notify/notify_rpc.o
00:03:22.632 CC lib/keyring/keyring_rpc.o
00:03:22.633 CC lib/trace/trace_rpc.o
00:03:22.889 LIB libspdk_notify.a
00:03:22.889 SO libspdk_notify.so.6.0
00:03:22.889 LIB libspdk_keyring.a
00:03:22.889 SYMLINK libspdk_notify.so
00:03:22.889 LIB libspdk_trace.a
00:03:22.889 SO libspdk_keyring.so.1.0
00:03:22.889 SO libspdk_trace.so.10.0
00:03:23.146 SYMLINK libspdk_keyring.so
00:03:23.146 SYMLINK libspdk_trace.so
00:03:23.146 LIB libspdk_env_dpdk.a
00:03:23.146 SO libspdk_env_dpdk.so.14.0
00:03:23.146 CC lib/sock/sock.o
00:03:23.146 CC lib/sock/sock_rpc.o
00:03:23.146 CC lib/thread/thread.o
00:03:23.146 CC lib/thread/iobuf.o
00:03:23.404 SYMLINK libspdk_env_dpdk.so
00:03:23.661 LIB libspdk_sock.a
00:03:23.661 SO libspdk_sock.so.9.0
00:03:23.661 SYMLINK libspdk_sock.so
00:03:23.919 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:23.919 CC lib/nvme/nvme_ctrlr.o
00:03:23.919 CC lib/nvme/nvme_fabric.o
00:03:23.919 CC lib/nvme/nvme_ns_cmd.o
00:03:23.919 CC lib/nvme/nvme_ns.o
00:03:23.919 CC lib/nvme/nvme_pcie_common.o
00:03:23.919 CC lib/nvme/nvme_pcie.o
00:03:23.919 CC lib/nvme/nvme_qpair.o
00:03:23.919 CC lib/nvme/nvme.o
00:03:23.919 CC lib/nvme/nvme_quirks.o
00:03:23.919 CC lib/nvme/nvme_transport.o
00:03:23.919 CC lib/nvme/nvme_discovery.o
00:03:23.919 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:23.919 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:23.919 CC lib/nvme/nvme_tcp.o
00:03:23.919 CC lib/nvme/nvme_opal.o
00:03:23.919 CC lib/nvme/nvme_io_msg.o
00:03:23.919 CC lib/nvme/nvme_poll_group.o
00:03:23.919 CC lib/nvme/nvme_zns.o
00:03:23.919 CC lib/nvme/nvme_stubs.o
00:03:23.919 CC lib/nvme/nvme_auth.o
00:03:23.919 CC lib/nvme/nvme_cuse.o
00:03:23.919 CC lib/nvme/nvme_vfio_user.o
00:03:23.919 CC lib/nvme/nvme_rdma.o
00:03:24.855 LIB libspdk_thread.a
00:03:24.855 SO libspdk_thread.so.10.0
00:03:24.855 SYMLINK libspdk_thread.so
00:03:25.113 CC lib/accel/accel.o
00:03:25.113 CC lib/accel/accel_rpc.o
00:03:25.113 CC lib/init/json_config.o
00:03:25.113 CC lib/blob/blobstore.o
00:03:25.113 CC lib/accel/accel_sw.o
00:03:25.113 CC lib/init/subsystem.o
00:03:25.113 CC lib/blob/request.o
00:03:25.113 CC lib/init/subsystem_rpc.o
00:03:25.113 CC lib/blob/zeroes.o
00:03:25.113 CC lib/vfu_tgt/tgt_endpoint.o
00:03:25.113 CC lib/virtio/virtio.o
00:03:25.113 CC lib/blob/blob_bs_dev.o
00:03:25.113 CC lib/init/rpc.o
00:03:25.113 CC lib/vfu_tgt/tgt_rpc.o
00:03:25.113 CC lib/virtio/virtio_vhost_user.o
00:03:25.113 CC lib/virtio/virtio_vfio_user.o
00:03:25.113 CC lib/virtio/virtio_pci.o
00:03:25.371 LIB libspdk_init.a
00:03:25.371 SO libspdk_init.so.5.0
00:03:25.371 LIB libspdk_virtio.a
00:03:25.372 LIB libspdk_vfu_tgt.a
00:03:25.372 SYMLINK libspdk_init.so
00:03:25.372 SO libspdk_virtio.so.7.0
00:03:25.372 SO libspdk_vfu_tgt.so.3.0
00:03:25.372 SYMLINK libspdk_vfu_tgt.so
00:03:25.372 SYMLINK libspdk_virtio.so
00:03:25.630 CC lib/event/app.o
00:03:25.630 CC lib/event/reactor.o
00:03:25.630 CC lib/event/log_rpc.o
00:03:25.630 CC lib/event/app_rpc.o
00:03:25.630 CC lib/event/scheduler_static.o
00:03:25.887 LIB libspdk_event.a
00:03:25.887 SO libspdk_event.so.13.0
00:03:26.145 SYMLINK libspdk_event.so
00:03:26.145 LIB libspdk_accel.a
00:03:26.145 SO libspdk_accel.so.15.0
00:03:26.145 SYMLINK libspdk_accel.so
00:03:26.145 LIB libspdk_nvme.a
00:03:26.404 CC lib/bdev/bdev.o
00:03:26.404 CC lib/bdev/bdev_rpc.o
00:03:26.404 CC lib/bdev/bdev_zone.o
00:03:26.404 CC lib/bdev/part.o
00:03:26.404 CC lib/bdev/scsi_nvme.o
00:03:26.404 SO libspdk_nvme.so.13.0
00:03:26.662 SYMLINK libspdk_nvme.so
00:03:28.034 LIB libspdk_blob.a
00:03:28.034 SO libspdk_blob.so.11.0
00:03:28.034 SYMLINK libspdk_blob.so
00:03:28.292 CC lib/blobfs/blobfs.o
00:03:28.292 CC lib/blobfs/tree.o
00:03:28.292 CC lib/lvol/lvol.o
00:03:28.858 LIB libspdk_bdev.a
00:03:28.858 SO libspdk_bdev.so.15.0
00:03:29.124 SYMLINK libspdk_bdev.so
00:03:29.124 LIB libspdk_blobfs.a
00:03:29.124 SO libspdk_blobfs.so.10.0
00:03:29.124 SYMLINK libspdk_blobfs.so
00:03:29.124 CC lib/nvmf/ctrlr.o
00:03:29.124 CC lib/ublk/ublk.o
00:03:29.124 CC lib/ublk/ublk_rpc.o
00:03:29.124 CC lib/ftl/ftl_core.o
00:03:29.124 CC lib/nvmf/ctrlr_discovery.o
00:03:29.124 CC lib/scsi/dev.o
00:03:29.124 CC lib/nbd/nbd.o
00:03:29.124 CC lib/ftl/ftl_init.o
00:03:29.124 CC lib/nvmf/ctrlr_bdev.o
00:03:29.124 CC lib/scsi/lun.o
00:03:29.124 CC lib/nbd/nbd_rpc.o
00:03:29.124 CC lib/ftl/ftl_layout.o
00:03:29.124 CC lib/nvmf/subsystem.o
00:03:29.124 CC lib/scsi/port.o
00:03:29.124 CC lib/ftl/ftl_debug.o
00:03:29.124 CC lib/nvmf/nvmf.o
00:03:29.124 CC lib/ftl/ftl_io.o
00:03:29.124 CC lib/scsi/scsi.o
00:03:29.124 CC lib/scsi/scsi_bdev.o
00:03:29.124 CC lib/nvmf/nvmf_rpc.o
00:03:29.124 CC lib/nvmf/transport.o
00:03:29.124 CC lib/ftl/ftl_sb.o
00:03:29.124 CC lib/nvmf/tcp.o
00:03:29.124 CC lib/scsi/scsi_pr.o
00:03:29.124 CC lib/ftl/ftl_l2p.o
00:03:29.124 CC lib/scsi/scsi_rpc.o
00:03:29.124 CC lib/ftl/ftl_l2p_flat.o
00:03:29.124 CC lib/nvmf/stubs.o
00:03:29.124 CC lib/scsi/task.o
00:03:29.124 CC lib/nvmf/mdns_server.o
00:03:29.124 CC lib/ftl/ftl_nv_cache.o
00:03:29.124 CC lib/nvmf/vfio_user.o
00:03:29.124 CC lib/ftl/ftl_band.o
00:03:29.124 CC lib/ftl/ftl_band_ops.o
00:03:29.124 CC lib/nvmf/rdma.o
00:03:29.124 CC lib/nvmf/auth.o
00:03:29.124 CC lib/ftl/ftl_writer.o
00:03:29.124 CC lib/ftl/ftl_rq.o
00:03:29.124 CC lib/ftl/ftl_reloc.o
00:03:29.124 CC lib/ftl/ftl_l2p_cache.o
00:03:29.124 CC lib/ftl/ftl_p2l.o
00:03:29.124 CC lib/ftl/mngt/ftl_mngt.o
00:03:29.124 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:29.124 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:29.124 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:29.124 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:29.124 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:29.384 LIB libspdk_lvol.a
00:03:29.384 SO libspdk_lvol.so.10.0
00:03:29.384 SYMLINK libspdk_lvol.so
00:03:29.384 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:29.645 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:29.645 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:29.645 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:29.645 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:29.645 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:29.645 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:29.645 CC lib/ftl/utils/ftl_conf.o
00:03:29.645 CC lib/ftl/utils/ftl_md.o
00:03:29.645 CC lib/ftl/utils/ftl_mempool.o
00:03:29.645 CC lib/ftl/utils/ftl_bitmap.o
00:03:29.645 CC
lib/ftl/utils/ftl_property.o 00:03:29.645 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:29.645 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:29.645 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:29.645 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:29.645 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:29.645 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:29.645 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:29.903 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:29.903 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:29.903 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:29.903 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.903 CC lib/ftl/base/ftl_base_dev.o 00:03:29.903 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.903 CC lib/ftl/ftl_trace.o 00:03:29.903 LIB libspdk_nbd.a 00:03:30.162 SO libspdk_nbd.so.7.0 00:03:30.162 SYMLINK libspdk_nbd.so 00:03:30.162 LIB libspdk_scsi.a 00:03:30.162 SO libspdk_scsi.so.9.0 00:03:30.162 LIB libspdk_ublk.a 00:03:30.162 SO libspdk_ublk.so.3.0 00:03:30.419 SYMLINK libspdk_scsi.so 00:03:30.419 SYMLINK libspdk_ublk.so 00:03:30.419 CC lib/vhost/vhost.o 00:03:30.419 CC lib/iscsi/conn.o 00:03:30.419 CC lib/vhost/vhost_rpc.o 00:03:30.419 CC lib/iscsi/init_grp.o 00:03:30.419 CC lib/vhost/vhost_scsi.o 00:03:30.419 CC lib/iscsi/iscsi.o 00:03:30.419 CC lib/vhost/vhost_blk.o 00:03:30.419 CC lib/iscsi/md5.o 00:03:30.419 CC lib/vhost/rte_vhost_user.o 00:03:30.419 CC lib/iscsi/param.o 00:03:30.419 CC lib/iscsi/portal_grp.o 00:03:30.419 CC lib/iscsi/tgt_node.o 00:03:30.419 CC lib/iscsi/iscsi_subsystem.o 00:03:30.419 CC lib/iscsi/iscsi_rpc.o 00:03:30.419 CC lib/iscsi/task.o 00:03:30.677 LIB libspdk_ftl.a 00:03:30.935 SO libspdk_ftl.so.9.0 00:03:31.193 SYMLINK libspdk_ftl.so 00:03:31.760 LIB libspdk_vhost.a 00:03:31.760 SO libspdk_vhost.so.8.0 00:03:31.760 LIB libspdk_nvmf.a 00:03:31.760 SO libspdk_nvmf.so.18.0 00:03:31.760 SYMLINK libspdk_vhost.so 00:03:32.018 LIB libspdk_iscsi.a 00:03:32.018 SO libspdk_iscsi.so.8.0 00:03:32.018 SYMLINK libspdk_nvmf.so 00:03:32.018 SYMLINK libspdk_iscsi.so 00:03:32.277 CC module/env_dpdk/env_dpdk_rpc.o 00:03:32.277 CC module/vfu_device/vfu_virtio.o 00:03:32.277 CC module/vfu_device/vfu_virtio_blk.o 00:03:32.277 CC module/vfu_device/vfu_virtio_scsi.o 00:03:32.277 CC module/vfu_device/vfu_virtio_rpc.o 00:03:32.535 CC module/keyring/file/keyring.o 00:03:32.535 CC module/accel/ioat/accel_ioat.o 00:03:32.535 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.535 CC module/accel/ioat/accel_ioat_rpc.o 00:03:32.535 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:32.535 CC module/blob/bdev/blob_bdev.o 00:03:32.535 CC module/accel/iaa/accel_iaa.o 00:03:32.535 CC module/accel/dsa/accel_dsa.o 00:03:32.535 CC module/accel/iaa/accel_iaa_rpc.o 00:03:32.535 CC module/keyring/linux/keyring.o 00:03:32.535 CC module/accel/dsa/accel_dsa_rpc.o 00:03:32.535 CC module/accel/error/accel_error.o 00:03:32.535 CC module/sock/posix/posix.o 00:03:32.535 CC module/keyring/linux/keyring_rpc.o 00:03:32.535 CC module/keyring/file/keyring_rpc.o 00:03:32.535 CC module/scheduler/gscheduler/gscheduler.o 00:03:32.535 CC module/accel/error/accel_error_rpc.o 00:03:32.535 LIB libspdk_env_dpdk_rpc.a 00:03:32.535 SO libspdk_env_dpdk_rpc.so.6.0 00:03:32.535 SYMLINK libspdk_env_dpdk_rpc.so 00:03:32.535 LIB libspdk_keyring_linux.a 00:03:32.535 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.535 LIB libspdk_scheduler_gscheduler.a 00:03:32.535 LIB libspdk_keyring_file.a 00:03:32.535 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:32.535 SO libspdk_keyring_linux.so.1.0 00:03:32.535 SO libspdk_scheduler_gscheduler.so.4.0 00:03:32.793 
LIB libspdk_accel_error.a 00:03:32.794 SO libspdk_keyring_file.so.1.0 00:03:32.794 LIB libspdk_accel_ioat.a 00:03:32.794 LIB libspdk_scheduler_dynamic.a 00:03:32.794 LIB libspdk_accel_iaa.a 00:03:32.794 SO libspdk_accel_error.so.2.0 00:03:32.794 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.794 SO libspdk_scheduler_dynamic.so.4.0 00:03:32.794 SO libspdk_accel_ioat.so.6.0 00:03:32.794 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.794 SYMLINK libspdk_keyring_linux.so 00:03:32.794 SO libspdk_accel_iaa.so.3.0 00:03:32.794 SYMLINK libspdk_keyring_file.so 00:03:32.794 LIB libspdk_accel_dsa.a 00:03:32.794 SYMLINK libspdk_accel_error.so 00:03:32.794 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.794 LIB libspdk_blob_bdev.a 00:03:32.794 SYMLINK libspdk_accel_ioat.so 00:03:32.794 SO libspdk_accel_dsa.so.5.0 00:03:32.794 SO libspdk_blob_bdev.so.11.0 00:03:32.794 SYMLINK libspdk_accel_iaa.so 00:03:32.794 SYMLINK libspdk_blob_bdev.so 00:03:32.794 SYMLINK libspdk_accel_dsa.so 00:03:33.053 LIB libspdk_vfu_device.a 00:03:33.053 SO libspdk_vfu_device.so.3.0 00:03:33.053 CC module/bdev/aio/bdev_aio.o 00:03:33.053 CC module/bdev/gpt/gpt.o 00:03:33.053 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.053 CC module/bdev/raid/bdev_raid.o 00:03:33.053 CC module/bdev/delay/vbdev_delay.o 00:03:33.053 CC module/bdev/malloc/bdev_malloc.o 00:03:33.053 CC module/bdev/aio/bdev_aio_rpc.o 00:03:33.053 CC module/bdev/gpt/vbdev_gpt.o 00:03:33.053 CC module/bdev/error/vbdev_error.o 00:03:33.053 CC module/blobfs/bdev/blobfs_bdev.o 00:03:33.053 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.053 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:33.053 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.053 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.053 CC module/bdev/error/vbdev_error_rpc.o 00:03:33.053 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:33.053 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.053 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:33.053 CC module/bdev/lvol/vbdev_lvol.o 00:03:33.053 CC module/bdev/ftl/bdev_ftl.o 00:03:33.053 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:33.053 CC module/bdev/split/vbdev_split.o 00:03:33.053 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.053 CC module/bdev/passthru/vbdev_passthru.o 00:03:33.053 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:33.053 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:33.053 CC module/bdev/raid/raid0.o 00:03:33.053 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.053 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:33.053 CC module/bdev/raid/raid1.o 00:03:33.053 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:33.053 CC module/bdev/split/vbdev_split_rpc.o 00:03:33.053 CC module/bdev/null/bdev_null.o 00:03:33.053 CC module/bdev/raid/concat.o 00:03:33.053 CC module/bdev/null/bdev_null_rpc.o 00:03:33.053 CC module/bdev/nvme/bdev_nvme.o 00:03:33.053 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:33.053 CC module/bdev/nvme/nvme_rpc.o 00:03:33.053 CC module/bdev/nvme/bdev_mdns_client.o 00:03:33.053 CC module/bdev/nvme/vbdev_opal.o 00:03:33.053 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:33.053 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.312 SYMLINK libspdk_vfu_device.so 00:03:33.312 LIB libspdk_sock_posix.a 00:03:33.312 SO libspdk_sock_posix.so.6.0 00:03:33.312 SYMLINK libspdk_sock_posix.so 00:03:33.569 LIB libspdk_bdev_split.a 00:03:33.569 LIB libspdk_blobfs_bdev.a 00:03:33.569 SO libspdk_bdev_split.so.6.0 00:03:33.569 SO libspdk_blobfs_bdev.so.6.0 00:03:33.569 SYMLINK libspdk_bdev_split.so 00:03:33.569 SYMLINK libspdk_blobfs_bdev.so 00:03:33.569 LIB 
libspdk_bdev_null.a 00:03:33.569 LIB libspdk_bdev_gpt.a 00:03:33.569 LIB libspdk_bdev_ftl.a 00:03:33.569 SO libspdk_bdev_null.so.6.0 00:03:33.569 LIB libspdk_bdev_error.a 00:03:33.569 SO libspdk_bdev_gpt.so.6.0 00:03:33.569 SO libspdk_bdev_ftl.so.6.0 00:03:33.569 LIB libspdk_bdev_aio.a 00:03:33.569 LIB libspdk_bdev_zone_block.a 00:03:33.569 SO libspdk_bdev_error.so.6.0 00:03:33.569 LIB libspdk_bdev_passthru.a 00:03:33.569 SO libspdk_bdev_aio.so.6.0 00:03:33.569 SYMLINK libspdk_bdev_null.so 00:03:33.569 SO libspdk_bdev_passthru.so.6.0 00:03:33.569 SO libspdk_bdev_zone_block.so.6.0 00:03:33.569 LIB libspdk_bdev_delay.a 00:03:33.569 SYMLINK libspdk_bdev_gpt.so 00:03:33.569 SYMLINK libspdk_bdev_ftl.so 00:03:33.827 SYMLINK libspdk_bdev_error.so 00:03:33.828 SO libspdk_bdev_delay.so.6.0 00:03:33.828 LIB libspdk_bdev_malloc.a 00:03:33.828 SYMLINK libspdk_bdev_aio.so 00:03:33.828 SYMLINK libspdk_bdev_passthru.so 00:03:33.828 SYMLINK libspdk_bdev_zone_block.so 00:03:33.828 LIB libspdk_bdev_iscsi.a 00:03:33.828 SO libspdk_bdev_malloc.so.6.0 00:03:33.828 SO libspdk_bdev_iscsi.so.6.0 00:03:33.828 SYMLINK libspdk_bdev_delay.so 00:03:33.828 LIB libspdk_bdev_lvol.a 00:03:33.828 SYMLINK libspdk_bdev_malloc.so 00:03:33.828 SYMLINK libspdk_bdev_iscsi.so 00:03:33.828 LIB libspdk_bdev_virtio.a 00:03:33.828 SO libspdk_bdev_lvol.so.6.0 00:03:33.828 SO libspdk_bdev_virtio.so.6.0 00:03:33.828 SYMLINK libspdk_bdev_lvol.so 00:03:33.828 SYMLINK libspdk_bdev_virtio.so 00:03:34.086 LIB libspdk_bdev_raid.a 00:03:34.086 SO libspdk_bdev_raid.so.6.0 00:03:34.345 SYMLINK libspdk_bdev_raid.so 00:03:35.722 LIB libspdk_bdev_nvme.a 00:03:35.722 SO libspdk_bdev_nvme.so.7.0 00:03:35.722 SYMLINK libspdk_bdev_nvme.so 00:03:35.981 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.981 CC module/event/subsystems/vmd/vmd.o 00:03:35.981 CC module/event/subsystems/sock/sock.o 00:03:35.981 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:35.981 CC module/event/subsystems/scheduler/scheduler.o 00:03:35.981 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.981 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:35.981 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:35.981 CC module/event/subsystems/keyring/keyring.o 00:03:35.981 LIB libspdk_event_keyring.a 00:03:35.981 LIB libspdk_event_sock.a 00:03:35.981 LIB libspdk_event_vhost_blk.a 00:03:35.981 LIB libspdk_event_vmd.a 00:03:35.981 LIB libspdk_event_scheduler.a 00:03:35.981 LIB libspdk_event_vfu_tgt.a 00:03:35.981 SO libspdk_event_keyring.so.1.0 00:03:35.981 LIB libspdk_event_iobuf.a 00:03:35.981 SO libspdk_event_sock.so.5.0 00:03:35.981 SO libspdk_event_vfu_tgt.so.3.0 00:03:35.981 SO libspdk_event_scheduler.so.4.0 00:03:35.981 SO libspdk_event_vhost_blk.so.3.0 00:03:35.981 SO libspdk_event_vmd.so.6.0 00:03:36.240 SO libspdk_event_iobuf.so.3.0 00:03:36.240 SYMLINK libspdk_event_keyring.so 00:03:36.240 SYMLINK libspdk_event_sock.so 00:03:36.240 SYMLINK libspdk_event_vfu_tgt.so 00:03:36.240 SYMLINK libspdk_event_vhost_blk.so 00:03:36.240 SYMLINK libspdk_event_scheduler.so 00:03:36.240 SYMLINK libspdk_event_vmd.so 00:03:36.240 SYMLINK libspdk_event_iobuf.so 00:03:36.240 CC module/event/subsystems/accel/accel.o 00:03:36.498 LIB libspdk_event_accel.a 00:03:36.498 SO libspdk_event_accel.so.6.0 00:03:36.498 SYMLINK libspdk_event_accel.so 00:03:36.756 CC module/event/subsystems/bdev/bdev.o 00:03:37.014 LIB libspdk_event_bdev.a 00:03:37.014 SO libspdk_event_bdev.so.6.0 00:03:37.014 SYMLINK libspdk_event_bdev.so 00:03:37.272 CC module/event/subsystems/nbd/nbd.o 00:03:37.272 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.272 CC module/event/subsystems/ublk/ublk.o 00:03:37.272 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.272 CC module/event/subsystems/scsi/scsi.o 00:03:37.272 LIB libspdk_event_nbd.a 00:03:37.272 LIB libspdk_event_ublk.a 00:03:37.272 LIB libspdk_event_scsi.a 00:03:37.272 SO libspdk_event_nbd.so.6.0 00:03:37.272 SO libspdk_event_ublk.so.3.0 00:03:37.272 SO libspdk_event_scsi.so.6.0 00:03:37.530 SYMLINK libspdk_event_nbd.so 00:03:37.530 SYMLINK libspdk_event_ublk.so 00:03:37.530 SYMLINK libspdk_event_scsi.so 00:03:37.530 LIB libspdk_event_nvmf.a 00:03:37.530 SO libspdk_event_nvmf.so.6.0 00:03:37.530 SYMLINK libspdk_event_nvmf.so 00:03:37.530 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.530 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.787 LIB libspdk_event_vhost_scsi.a 00:03:37.787 LIB libspdk_event_iscsi.a 00:03:37.787 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.787 SO libspdk_event_iscsi.so.6.0 00:03:37.787 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.787 SYMLINK libspdk_event_iscsi.so 00:03:38.056 SO libspdk.so.6.0 00:03:38.056 SYMLINK libspdk.so 00:03:38.056 TEST_HEADER include/spdk/accel.h 00:03:38.056 CC app/spdk_nvme_identify/identify.o 00:03:38.056 CC app/spdk_lspci/spdk_lspci.o 00:03:38.056 CC test/rpc_client/rpc_client_test.o 00:03:38.056 CC app/spdk_nvme_perf/perf.o 00:03:38.056 CXX app/trace/trace.o 00:03:38.056 TEST_HEADER include/spdk/accel_module.h 00:03:38.056 CC app/spdk_top/spdk_top.o 00:03:38.056 TEST_HEADER include/spdk/assert.h 00:03:38.056 TEST_HEADER include/spdk/barrier.h 00:03:38.056 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.324 CC app/trace_record/trace_record.o 00:03:38.324 TEST_HEADER include/spdk/base64.h 00:03:38.324 TEST_HEADER include/spdk/bdev.h 00:03:38.324 TEST_HEADER include/spdk/bdev_module.h 00:03:38.324 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.324 TEST_HEADER include/spdk/bit_array.h 00:03:38.324 TEST_HEADER include/spdk/bit_pool.h 00:03:38.324 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.324 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.324 TEST_HEADER include/spdk/blobfs.h 00:03:38.324 TEST_HEADER include/spdk/blob.h 00:03:38.324 TEST_HEADER include/spdk/conf.h 00:03:38.324 TEST_HEADER include/spdk/config.h 00:03:38.324 TEST_HEADER include/spdk/cpuset.h 00:03:38.324 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:38.324 TEST_HEADER include/spdk/crc16.h 00:03:38.324 TEST_HEADER include/spdk/crc32.h 00:03:38.324 CC app/spdk_dd/spdk_dd.o 00:03:38.324 TEST_HEADER include/spdk/crc64.h 00:03:38.324 TEST_HEADER include/spdk/dif.h 00:03:38.324 TEST_HEADER include/spdk/dma.h 00:03:38.324 TEST_HEADER include/spdk/endian.h 00:03:38.324 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.324 TEST_HEADER include/spdk/env.h 00:03:38.324 TEST_HEADER include/spdk/event.h 00:03:38.324 TEST_HEADER include/spdk/fd_group.h 00:03:38.324 CC app/nvmf_tgt/nvmf_main.o 00:03:38.324 TEST_HEADER include/spdk/fd.h 00:03:38.324 CC app/iscsi_tgt/iscsi_tgt.o 00:03:38.324 CC app/vhost/vhost.o 00:03:38.324 TEST_HEADER include/spdk/file.h 00:03:38.324 TEST_HEADER include/spdk/ftl.h 00:03:38.324 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.324 TEST_HEADER include/spdk/hexlify.h 00:03:38.324 TEST_HEADER include/spdk/histogram_data.h 00:03:38.324 TEST_HEADER include/spdk/idxd.h 00:03:38.324 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.324 TEST_HEADER include/spdk/init.h 00:03:38.324 TEST_HEADER include/spdk/ioat.h 00:03:38.324 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.324 CC 
examples/nvme/hotplug/hotplug.o 00:03:38.324 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.324 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:38.324 CC examples/idxd/perf/perf.o 00:03:38.324 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.324 TEST_HEADER include/spdk/json.h 00:03:38.324 CC examples/nvme/reconnect/reconnect.o 00:03:38.324 CC examples/nvme/hello_world/hello_world.o 00:03:38.324 CC examples/nvme/arbitration/arbitration.o 00:03:38.324 CC test/event/event_perf/event_perf.o 00:03:38.324 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.324 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.324 CC examples/ioat/perf/perf.o 00:03:38.324 CC examples/ioat/verify/verify.o 00:03:38.324 CC test/event/reactor/reactor.o 00:03:38.324 CC examples/accel/perf/accel_perf.o 00:03:38.324 CC test/nvme/aer/aer.o 00:03:38.324 CC examples/sock/hello_world/hello_sock.o 00:03:38.324 TEST_HEADER include/spdk/keyring.h 00:03:38.324 TEST_HEADER include/spdk/keyring_module.h 00:03:38.324 TEST_HEADER include/spdk/likely.h 00:03:38.324 TEST_HEADER include/spdk/log.h 00:03:38.324 CC examples/nvme/abort/abort.o 00:03:38.324 CC app/spdk_tgt/spdk_tgt.o 00:03:38.324 CC examples/util/zipf/zipf.o 00:03:38.324 TEST_HEADER include/spdk/lvol.h 00:03:38.324 TEST_HEADER include/spdk/memory.h 00:03:38.324 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.324 CC app/fio/nvme/fio_plugin.o 00:03:38.324 TEST_HEADER include/spdk/mmio.h 00:03:38.324 TEST_HEADER include/spdk/nbd.h 00:03:38.324 CC test/thread/poller_perf/poller_perf.o 00:03:38.324 TEST_HEADER include/spdk/notify.h 00:03:38.324 TEST_HEADER include/spdk/nvme.h 00:03:38.324 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.324 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.324 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.324 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.324 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.324 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.324 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.324 TEST_HEADER include/spdk/nvmf.h 00:03:38.324 CC test/dma/test_dma/test_dma.o 00:03:38.324 CC examples/blob/hello_world/hello_blob.o 00:03:38.324 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.324 CC test/blobfs/mkfs/mkfs.o 00:03:38.324 CC examples/blob/cli/blobcli.o 00:03:38.324 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.324 CC test/app/bdev_svc/bdev_svc.o 00:03:38.324 CC examples/nvmf/nvmf/nvmf.o 00:03:38.324 TEST_HEADER include/spdk/opal.h 00:03:38.324 CC test/bdev/bdevio/bdevio.o 00:03:38.324 TEST_HEADER include/spdk/opal_spec.h 00:03:38.324 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.324 CC test/accel/dif/dif.o 00:03:38.324 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.324 TEST_HEADER include/spdk/pci_ids.h 00:03:38.324 CC examples/thread/thread/thread_ex.o 00:03:38.324 TEST_HEADER include/spdk/pipe.h 00:03:38.324 TEST_HEADER include/spdk/queue.h 00:03:38.324 TEST_HEADER include/spdk/reduce.h 00:03:38.324 TEST_HEADER include/spdk/rpc.h 00:03:38.324 TEST_HEADER include/spdk/scheduler.h 00:03:38.324 TEST_HEADER include/spdk/scsi.h 00:03:38.324 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.587 TEST_HEADER include/spdk/sock.h 00:03:38.587 TEST_HEADER include/spdk/stdinc.h 00:03:38.587 TEST_HEADER include/spdk/string.h 00:03:38.587 TEST_HEADER include/spdk/thread.h 00:03:38.587 TEST_HEADER include/spdk/trace.h 00:03:38.587 TEST_HEADER include/spdk/trace_parser.h 00:03:38.587 TEST_HEADER include/spdk/tree.h 00:03:38.587 TEST_HEADER include/spdk/ublk.h 00:03:38.587 TEST_HEADER include/spdk/util.h 00:03:38.588 TEST_HEADER 
include/spdk/uuid.h 00:03:38.588 LINK spdk_lspci 00:03:38.588 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.588 TEST_HEADER include/spdk/version.h 00:03:38.588 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:38.588 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:38.588 TEST_HEADER include/spdk/vhost.h 00:03:38.588 TEST_HEADER include/spdk/vmd.h 00:03:38.588 CC test/lvol/esnap/esnap.o 00:03:38.588 TEST_HEADER include/spdk/xor.h 00:03:38.588 TEST_HEADER include/spdk/zipf.h 00:03:38.588 CXX test/cpp_headers/accel.o 00:03:38.588 LINK rpc_client_test 00:03:38.588 LINK spdk_nvme_discover 00:03:38.588 LINK interrupt_tgt 00:03:38.588 LINK lsvmd 00:03:38.588 LINK reactor 00:03:38.588 LINK nvmf_tgt 00:03:38.588 LINK event_perf 00:03:38.588 LINK vhost 00:03:38.588 LINK zipf 00:03:38.588 LINK poller_perf 00:03:38.588 LINK pmr_persistence 00:03:38.588 LINK spdk_trace_record 00:03:38.588 LINK cmb_copy 00:03:38.865 LINK iscsi_tgt 00:03:38.865 LINK ioat_perf 00:03:38.865 LINK verify 00:03:38.865 LINK spdk_tgt 00:03:38.865 LINK bdev_svc 00:03:38.865 LINK hotplug 00:03:38.865 LINK hello_world 00:03:38.865 LINK mkfs 00:03:38.865 LINK hello_sock 00:03:38.865 LINK hello_blob 00:03:38.865 LINK hello_bdev 00:03:38.865 CXX test/cpp_headers/accel_module.o 00:03:38.865 LINK aer 00:03:38.865 LINK thread 00:03:38.865 LINK spdk_dd 00:03:39.149 LINK arbitration 00:03:39.149 LINK idxd_perf 00:03:39.149 LINK reconnect 00:03:39.149 LINK nvmf 00:03:39.150 LINK spdk_trace 00:03:39.150 CXX test/cpp_headers/assert.o 00:03:39.150 LINK test_dma 00:03:39.150 LINK abort 00:03:39.150 CC test/nvme/reset/reset.o 00:03:39.150 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.150 CC test/env/vtophys/vtophys.o 00:03:39.150 CC examples/vmd/led/led.o 00:03:39.150 LINK bdevio 00:03:39.150 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.150 CC app/fio/bdev/fio_plugin.o 00:03:39.150 CC test/event/reactor_perf/reactor_perf.o 00:03:39.150 CXX test/cpp_headers/barrier.o 00:03:39.463 CXX test/cpp_headers/base64.o 00:03:39.463 LINK nvme_manage 00:03:39.463 CC test/nvme/sgl/sgl.o 00:03:39.463 LINK dif 00:03:39.463 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.463 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:39.463 CC test/event/app_repeat/app_repeat.o 00:03:39.463 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:39.463 CC test/env/pci/pci_ut.o 00:03:39.463 CC test/env/memory/memory_ut.o 00:03:39.463 CXX test/cpp_headers/bdev.o 00:03:39.463 CXX test/cpp_headers/bdev_module.o 00:03:39.463 CC test/app/histogram_perf/histogram_perf.o 00:03:39.463 CXX test/cpp_headers/bdev_zone.o 00:03:39.463 LINK accel_perf 00:03:39.463 CXX test/cpp_headers/bit_array.o 00:03:39.463 LINK blobcli 00:03:39.463 CC test/app/stub/stub.o 00:03:39.463 CC test/app/jsoncat/jsoncat.o 00:03:39.463 CXX test/cpp_headers/bit_pool.o 00:03:39.463 CXX test/cpp_headers/blob_bdev.o 00:03:39.463 LINK spdk_nvme 00:03:39.463 CC test/event/scheduler/scheduler.o 00:03:39.463 LINK vtophys 00:03:39.463 LINK led 00:03:39.463 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.463 CXX test/cpp_headers/blobfs.o 00:03:39.463 LINK reactor_perf 00:03:39.746 CC test/nvme/e2edp/nvme_dp.o 00:03:39.746 CXX test/cpp_headers/blob.o 00:03:39.746 CXX test/cpp_headers/conf.o 00:03:39.746 CXX test/cpp_headers/config.o 00:03:39.746 CC test/nvme/overhead/overhead.o 00:03:39.746 CC test/nvme/startup/startup.o 00:03:39.746 CC test/nvme/reserve/reserve.o 00:03:39.746 CXX test/cpp_headers/cpuset.o 00:03:39.746 CC test/nvme/err_injection/err_injection.o 00:03:39.746 LINK reset 00:03:39.746 LINK 
env_dpdk_post_init 00:03:39.746 LINK app_repeat 00:03:39.746 LINK mem_callbacks 00:03:39.746 CC test/nvme/simple_copy/simple_copy.o 00:03:39.746 CXX test/cpp_headers/crc16.o 00:03:39.746 LINK histogram_perf 00:03:39.746 CXX test/cpp_headers/crc32.o 00:03:39.746 LINK spdk_nvme_perf 00:03:39.746 CC test/nvme/connect_stress/connect_stress.o 00:03:39.746 CXX test/cpp_headers/crc64.o 00:03:39.746 LINK jsoncat 00:03:39.746 CXX test/cpp_headers/dif.o 00:03:39.746 CC test/nvme/boot_partition/boot_partition.o 00:03:39.746 LINK stub 00:03:39.746 CC test/nvme/fused_ordering/fused_ordering.o 00:03:39.746 CC test/nvme/compliance/nvme_compliance.o 00:03:39.746 LINK sgl 00:03:39.746 CXX test/cpp_headers/dma.o 00:03:39.746 CXX test/cpp_headers/endian.o 00:03:39.746 CXX test/cpp_headers/env_dpdk.o 00:03:40.011 CXX test/cpp_headers/env.o 00:03:40.011 CXX test/cpp_headers/event.o 00:03:40.011 CXX test/cpp_headers/fd_group.o 00:03:40.011 LINK bdevperf 00:03:40.011 CXX test/cpp_headers/fd.o 00:03:40.011 LINK spdk_nvme_identify 00:03:40.011 CXX test/cpp_headers/file.o 00:03:40.011 CXX test/cpp_headers/ftl.o 00:03:40.011 CXX test/cpp_headers/gpt_spec.o 00:03:40.011 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:40.011 LINK spdk_top 00:03:40.011 LINK scheduler 00:03:40.011 CC test/nvme/fdp/fdp.o 00:03:40.011 LINK nvme_fuzz 00:03:40.011 CXX test/cpp_headers/hexlify.o 00:03:40.011 CC test/nvme/cuse/cuse.o 00:03:40.011 CXX test/cpp_headers/histogram_data.o 00:03:40.011 LINK startup 00:03:40.011 CXX test/cpp_headers/idxd.o 00:03:40.011 CXX test/cpp_headers/idxd_spec.o 00:03:40.011 CXX test/cpp_headers/init.o 00:03:40.011 LINK err_injection 00:03:40.011 CXX test/cpp_headers/ioat.o 00:03:40.011 CXX test/cpp_headers/ioat_spec.o 00:03:40.011 LINK reserve 00:03:40.011 CXX test/cpp_headers/iscsi_spec.o 00:03:40.011 CXX test/cpp_headers/json.o 00:03:40.011 CXX test/cpp_headers/jsonrpc.o 00:03:40.011 CXX test/cpp_headers/keyring.o 00:03:40.274 LINK vhost_fuzz 00:03:40.274 LINK pci_ut 00:03:40.274 LINK connect_stress 00:03:40.274 CXX test/cpp_headers/keyring_module.o 00:03:40.274 LINK boot_partition 00:03:40.274 LINK simple_copy 00:03:40.274 LINK nvme_dp 00:03:40.274 CXX test/cpp_headers/likely.o 00:03:40.274 LINK spdk_bdev 00:03:40.274 CXX test/cpp_headers/log.o 00:03:40.274 LINK overhead 00:03:40.274 CXX test/cpp_headers/lvol.o 00:03:40.274 CXX test/cpp_headers/memory.o 00:03:40.274 CXX test/cpp_headers/mmio.o 00:03:40.274 CXX test/cpp_headers/nbd.o 00:03:40.274 CXX test/cpp_headers/notify.o 00:03:40.274 CXX test/cpp_headers/nvme.o 00:03:40.274 LINK fused_ordering 00:03:40.274 CXX test/cpp_headers/nvme_intel.o 00:03:40.274 CXX test/cpp_headers/nvme_ocssd.o 00:03:40.274 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:40.274 CXX test/cpp_headers/nvme_spec.o 00:03:40.274 CXX test/cpp_headers/nvme_zns.o 00:03:40.274 CXX test/cpp_headers/nvmf_cmd.o 00:03:40.274 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:40.274 CXX test/cpp_headers/nvmf.o 00:03:40.274 CXX test/cpp_headers/nvmf_spec.o 00:03:40.274 CXX test/cpp_headers/nvmf_transport.o 00:03:40.534 CXX test/cpp_headers/opal.o 00:03:40.534 CXX test/cpp_headers/opal_spec.o 00:03:40.534 CXX test/cpp_headers/pci_ids.o 00:03:40.534 LINK doorbell_aers 00:03:40.534 CXX test/cpp_headers/pipe.o 00:03:40.534 CXX test/cpp_headers/queue.o 00:03:40.534 CXX test/cpp_headers/reduce.o 00:03:40.534 CXX test/cpp_headers/rpc.o 00:03:40.534 CXX test/cpp_headers/scheduler.o 00:03:40.534 CXX test/cpp_headers/scsi.o 00:03:40.534 CXX test/cpp_headers/scsi_spec.o 00:03:40.534 CXX test/cpp_headers/sock.o 
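Each test/cpp_headers/<name>.o in this run (which continues below, through zipf.o) is, presumably, a standalone C++ compile of a single public header, verifying that every include/spdk/*.h is self-contained and C++-clean. A hedged one-header version of the same idea:

  # Hypothetical spot-check for one header: build a translation unit that
  # includes nothing but the header under test. The file name and -I path
  # are assumptions for illustration, run from an SPDK checkout.
  echo '#include <spdk/nvme.h>' > hdr_only.cpp
  c++ -Iinclude -c hdr_only.cpp -o /dev/null && echo 'spdk/nvme.h compiles standalone'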
00:03:40.534 CXX test/cpp_headers/stdinc.o 00:03:40.534 CXX test/cpp_headers/string.o 00:03:40.534 CXX test/cpp_headers/thread.o 00:03:40.534 CXX test/cpp_headers/trace.o 00:03:40.534 CXX test/cpp_headers/trace_parser.o 00:03:40.534 CXX test/cpp_headers/tree.o 00:03:40.534 LINK nvme_compliance 00:03:40.534 CXX test/cpp_headers/ublk.o 00:03:40.534 CXX test/cpp_headers/util.o 00:03:40.534 CXX test/cpp_headers/uuid.o 00:03:40.534 CXX test/cpp_headers/version.o 00:03:40.534 CXX test/cpp_headers/vfio_user_pci.o 00:03:40.534 CXX test/cpp_headers/vfio_user_spec.o 00:03:40.534 CXX test/cpp_headers/vhost.o 00:03:40.534 CXX test/cpp_headers/vmd.o 00:03:40.534 CXX test/cpp_headers/xor.o 00:03:40.534 CXX test/cpp_headers/zipf.o 00:03:40.791 LINK fdp 00:03:41.355 LINK memory_ut 00:03:41.620 LINK iscsi_fuzz 00:03:41.880 LINK cuse 00:03:44.415 LINK esnap 00:03:44.984 00:03:44.984 real 0m40.344s 00:03:44.984 user 7m38.874s 00:03:44.984 sys 1m50.565s 00:03:44.984 16:02:27 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:44.984 16:02:27 make -- common/autotest_common.sh@10 -- $ set +x 00:03:44.984 ************************************ 00:03:44.984 END TEST make 00:03:44.984 ************************************ 00:03:44.984 16:02:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:44.984 16:02:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:44.984 16:02:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:44.984 16:02:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.984 16:02:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:44.984 16:02:27 -- pm/common@44 -- $ pid=80937 00:03:44.984 16:02:27 -- pm/common@50 -- $ kill -TERM 80937 00:03:44.984 16:02:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.984 16:02:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:44.984 16:02:27 -- pm/common@44 -- $ pid=80939 00:03:44.984 16:02:27 -- pm/common@50 -- $ kill -TERM 80939 00:03:44.984 16:02:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.984 16:02:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:44.984 16:02:27 -- pm/common@44 -- $ pid=80941 00:03:44.984 16:02:27 -- pm/common@50 -- $ kill -TERM 80941 00:03:44.984 16:02:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.984 16:02:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:44.984 16:02:27 -- pm/common@44 -- $ pid=80970 00:03:44.984 16:02:27 -- pm/common@50 -- $ sudo -E kill -TERM 80970 00:03:44.984 16:02:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.984 16:02:27 -- nvmf/common.sh@7 -- # uname -s 00:03:44.984 16:02:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.984 16:02:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.984 16:02:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.984 16:02:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.984 16:02:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.984 16:02:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.984 16:02:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.984 16:02:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:03:44.984 16:02:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.984 16:02:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.984 16:02:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:44.984 16:02:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:44.984 16:02:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.984 16:02:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.984 16:02:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:44.984 16:02:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.984 16:02:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.984 16:02:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.984 16:02:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.984 16:02:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.984 16:02:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.984 16:02:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.984 16:02:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.984 16:02:27 -- paths/export.sh@5 -- # export PATH 00:03:44.985 16:02:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.985 16:02:27 -- nvmf/common.sh@47 -- # : 0 00:03:44.985 16:02:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:44.985 16:02:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:44.985 16:02:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.985 16:02:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.985 16:02:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.985 16:02:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:44.985 16:02:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:44.985 16:02:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:44.985 16:02:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:44.985 16:02:27 -- spdk/autotest.sh@32 -- # uname -s 00:03:44.985 16:02:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:44.985 16:02:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:44.985 16:02:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:44.985 16:02:27 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:44.985 16:02:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:44.985 16:02:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:44.985 16:02:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:44.985 16:02:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:44.985 16:02:27 -- spdk/autotest.sh@48 -- # udevadm_pid=157169 00:03:44.985 16:02:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:44.985 16:02:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:44.985 16:02:27 -- pm/common@17 -- # local monitor 00:03:44.985 16:02:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.985 16:02:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.985 16:02:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.985 16:02:27 -- pm/common@21 -- # date +%s 00:03:44.985 16:02:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.985 16:02:27 -- pm/common@21 -- # date +%s 00:03:44.985 16:02:27 -- pm/common@25 -- # sleep 1 00:03:44.985 16:02:27 -- pm/common@21 -- # date +%s 00:03:44.985 16:02:27 -- pm/common@21 -- # date +%s 00:03:44.985 16:02:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052147 00:03:44.985 16:02:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052147 00:03:44.985 16:02:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052147 00:03:44.985 16:02:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721052147 00:03:44.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052147_collect-vmstat.pm.log 00:03:44.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052147_collect-cpu-load.pm.log 00:03:44.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052147_collect-cpu-temp.pm.log 00:03:44.985 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721052147_collect-bmc-pm.bmc.pm.log 00:03:45.923 16:02:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.923 16:02:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:45.923 16:02:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:45.923 16:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:45.923 16:02:28 -- spdk/autotest.sh@59 -- # create_test_list 00:03:45.923 16:02:28 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:45.923 16:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:45.923 16:02:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:45.923 16:02:28 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.923 16:02:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.923 16:02:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:45.923 16:02:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.923 16:02:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:45.923 16:02:28 -- common/autotest_common.sh@1451 -- # uname 00:03:45.923 16:02:28 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:45.923 16:02:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:45.923 16:02:28 -- common/autotest_common.sh@1471 -- # uname 00:03:45.923 16:02:28 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:45.923 16:02:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:45.923 16:02:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:45.923 16:02:28 -- spdk/autotest.sh@72 -- # hash lcov 00:03:45.923 16:02:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:45.923 16:02:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:45.923 --rc lcov_branch_coverage=1 00:03:45.923 --rc lcov_function_coverage=1 00:03:45.923 --rc genhtml_branch_coverage=1 00:03:45.923 --rc genhtml_function_coverage=1 00:03:45.923 --rc genhtml_legend=1 00:03:45.923 --rc geninfo_all_blocks=1 00:03:45.923 ' 00:03:45.923 16:02:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:45.923 --rc lcov_branch_coverage=1 00:03:45.923 --rc lcov_function_coverage=1 00:03:45.923 --rc genhtml_branch_coverage=1 00:03:45.923 --rc genhtml_function_coverage=1 00:03:45.923 --rc genhtml_legend=1 00:03:45.923 --rc geninfo_all_blocks=1 00:03:45.923 ' 00:03:45.923 16:02:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:45.923 --rc lcov_branch_coverage=1 00:03:45.923 --rc lcov_function_coverage=1 00:03:45.923 --rc genhtml_branch_coverage=1 00:03:45.923 --rc genhtml_function_coverage=1 00:03:45.923 --rc genhtml_legend=1 00:03:45.923 --rc geninfo_all_blocks=1 00:03:45.923 --no-external' 00:03:45.923 16:02:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:45.923 --rc lcov_branch_coverage=1 00:03:45.923 --rc lcov_function_coverage=1 00:03:45.923 --rc genhtml_branch_coverage=1 00:03:45.923 --rc genhtml_function_coverage=1 00:03:45.923 --rc genhtml_legend=1 00:03:45.923 --rc geninfo_all_blocks=1 00:03:45.923 --no-external' 00:03:45.923 16:02:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:46.182 lcov: LCOV version 1.14 00:03:46.182 16:02:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:01.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:01.079 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:15.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:15.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:15.971 
geninfo: WARNING: GCOV did not produce any data for the header-compile note files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ (each .gcno is first reported as 'no functions found' and then warned about, at 00:04:15.971-00:04:15.972); the pair of messages repeats once per file for: file, ftl, gpt_spec, hexlify, idxd, idxd_spec, histogram_data, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, mmio, memory, nbd, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_zns, nvme_spec, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal_spec, opal, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, uuid, util, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor and zipf.
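These warnings are expected rather than a failure: the cpp_headers check compiles each public SPDK header as its own translation unit, and an object built from a header alone defines no functions, so geninfo finds nothing to report. A minimal sketch of that pattern, assuming hypothetical paths and a g++ toolchain (this is not the actual SPDK test script):

    #!/usr/bin/env bash
    # Sketch: compile every public header as a standalone translation unit
    # with coverage enabled. Each object gets a .gcno note file beside it,
    # and since a header-only TU defines no functions, geninfo later reports
    # 'no functions found' for exactly these files.
    set -euo pipefail
    SPDK_ROOT=${SPDK_ROOT:-/path/to/spdk}     # hypothetical checkout location
    OUT_DIR=$SPDK_ROOT/test/cpp_headers       # mirrors the directory in the log

    for hdr in "$SPDK_ROOT"/include/spdk/*.h; do
        name=$(basename "${hdr%.h}")
        printf '#include "spdk/%s.h"\n' "$name" > "$OUT_DIR/$name.cpp"
        g++ --coverage -I "$SPDK_ROOT/include" \
            -c "$OUT_DIR/$name.cpp" -o "$OUT_DIR/$name.o"   # also emits $name.gcno
    done

If the empty records are unwanted in the final report, they can be filtered out of the tracefile afterwards, for example with lcov --remove coverage.info '*/test/cpp_headers/*' -o filtered.info.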
00:04:19.261 16:03:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:19.261 16:03:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:19.261 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:04:19.261 16:03:01 -- spdk/autotest.sh@91 -- # rm -f 00:04:19.261 16:03:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.200 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:04:20.458 0000:00:04.0-7 and 0000:80:04.0-7 (8086 0e20-0e27): Already using the ioatdma driver (16 channels) 00:04:20.715 16:03:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:20.715 16:03:03 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:20.715 16:03:03 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:20.715 16:03:03 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:20.715 16:03:03 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:20.715 16:03:03 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:20.715 16:03:03 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:20.715 16:03:03 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.715 16:03:03 
-- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:20.715 16:03:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:20.715 16:03:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.715 16:03:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.715 16:03:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:20.715 16:03:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:20.715 16:03:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.715 No valid GPT data, bailing 00:04:20.715 16:03:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.715 16:03:03 -- scripts/common.sh@391 -- # pt= 00:04:20.715 16:03:03 -- scripts/common.sh@392 -- # return 1 00:04:20.715 16:03:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:20.715 1+0 records in 00:04:20.715 1+0 records out 00:04:20.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00234145 s, 448 MB/s 00:04:20.715 16:03:03 -- spdk/autotest.sh@118 -- # sync 00:04:20.715 16:03:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.715 16:03:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.715 16:03:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:22.622 16:03:05 -- spdk/autotest.sh@124 -- # uname -s 00:04:22.622 16:03:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:22.622 16:03:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:22.622 16:03:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.622 16:03:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.622 16:03:05 -- common/autotest_common.sh@10 -- # set +x 00:04:22.622 ************************************ 00:04:22.622 START TEST setup.sh 00:04:22.622 ************************************ 00:04:22.622 16:03:05 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:22.622 * Looking for test storage... 00:04:22.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:22.622 16:03:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:22.622 16:03:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:22.622 16:03:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:22.622 16:03:05 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.622 16:03:05 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.622 16:03:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.622 ************************************ 00:04:22.622 START TEST acl 00:04:22.622 ************************************ 00:04:22.622 16:03:05 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:22.881 * Looking for test storage... 
00:04:22.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:22.881 16:03:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.881 16:03:05 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:22.881 16:03:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:22.881 16:03:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:22.881 16:03:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:22.881 16:03:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:22.881 16:03:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:22.881 16:03:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.881 16:03:05 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.253 16:03:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:24.253 16:03:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:24.253 16:03:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.253 16:03:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:24.253 16:03:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.253 16:03:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:25.656 Hugepages 00:04:25.656 node hugesize free / total 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 00:04:25.656 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [xtrace condensed: the identical BDF-match / '[[ ioatdma == nvme ]]' / continue cycle repeats for each remaining ioatdma channel, 0000:00:04.2 through 0000:80:04.3] 00:04:25.656 16:03:08 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:25.656 16:03:08 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:25.656 16:03:08 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.656 16:03:08 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.656 16:03:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.656 ************************************ 00:04:25.656 START TEST denied 00:04:25.656 ************************************ 00:04:25.656 16:03:08 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:25.656 16:03:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:04:25.656 16:03:08 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:25.656 16:03:08 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.656 16:03:08 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.656 16:03:08 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:04:27.049 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:27.049 16:03:09 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.049 16:03:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.581 00:04:29.581 real 0m4.055s 00:04:29.581 user 0m1.144s 00:04:29.581 sys 0m1.967s 00:04:29.581 16:03:12 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.581 16:03:12 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:29.581 ************************************ 00:04:29.581 END TEST denied 00:04:29.581 ************************************ 00:04:29.581 16:03:12 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:29.581 16:03:12 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.581 16:03:12 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.581 16:03:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.839 ************************************ 00:04:29.839 START TEST allowed 00:04:29.839 ************************************ 00:04:29.839 16:03:12 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:29.839 16:03:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:04:29.839 16:03:12 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:29.839 16:03:12 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:04:29.839 16:03:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.839 16:03:12 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.371 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.371 16:03:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:32.371 16:03:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:32.371 16:03:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:32.371 16:03:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.371 16:03:15 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.746 00:04:33.746 real 0m4.021s 00:04:33.746 user 0m1.124s 00:04:33.746 sys 0m1.774s 00:04:33.746 16:03:16 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.746 16:03:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:33.746 ************************************ 00:04:33.746 END TEST allowed 00:04:33.746 ************************************ 00:04:33.746 00:04:33.746 real 0m11.025s 00:04:33.746 user 0m3.406s 00:04:33.746 sys 0m5.646s 00:04:33.746 16:03:16 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.746 16:03:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.746 ************************************ 00:04:33.746 END TEST acl 00:04:33.746 ************************************ 00:04:33.746 16:03:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:33.746 16:03:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.746 16:03:16 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.746 16:03:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.746 ************************************ 00:04:33.746 START TEST hugepages 00:04:33.746 ************************************ 00:04:33.746 16:03:16 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:33.746 * Looking for test storage... 00:04:33.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.746 16:03:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.747 16:03:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.747 16:03:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.747 16:03:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.747 16:03:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.747 16:03:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 24627252 kB' 'MemAvailable: 28211632 kB' 'Buffers: 2704 kB' 'Cached: 12716420 kB' 'SwapCached: 0 kB' 'Active: 9731844 kB' 'Inactive: 3505876 kB' 'Active(anon): 9341152 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521964 kB' 'Mapped: 216560 kB' 'Shmem: 8822556 kB' 'KReclaimable: 199268 kB' 'Slab: 550788 kB' 'SReclaimable: 199268 kB' 'SUnreclaim: 351520 kB' 'KernelStack: 12432 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 10473580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195456 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:33.747 16:03:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:33.747 16:03:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue [xtrace condensed: the identical IFS=': ' / read -r var val _ / [[ <field> == Hugepagesize ]] / continue cycle repeats for every remaining /proc/meminfo field, MemFree through HugePages_Rsvd, with no match] 
00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.008 16:03:16 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:34.008 16:03:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:34.008 16:03:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.008 16:03:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.008 16:03:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.008 ************************************ 00:04:34.008 START TEST default_setup 00:04:34.008 ************************************ 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.008 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.009 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:34.009 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.009 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:34.009 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:34.009 16:03:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:34.009 16:03:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.009 16:03:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.386 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.386 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:35.386 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.386 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.386 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.386 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.386 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:04:35.386 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:35.386 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:36.328 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.328 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.329 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26712680 kB' 'MemAvailable: 30297064 kB' 'Buffers: 2704 kB' 'Cached: 12716520 kB' 'SwapCached: 0 kB' 'Active: 9750032 kB' 'Inactive: 3505876 kB' 'Active(anon): 9359340 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539912 kB' 'Mapped: 216608 kB' 'Shmem: 8822656 kB' 'KReclaimable: 199276 kB' 'Slab: 550228 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 350952 kB' 'KernelStack: 12288 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10493936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
00:04:36.329 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26712680 kB' 'MemAvailable: 30297064 kB' 'Buffers: 2704 kB' 'Cached: 12716520 kB' 'SwapCached: 0 kB' 'Active: 9750032 kB' 'Inactive: 3505876 kB' 'Active(anon): 9359340 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539912 kB' 'Mapped: 216608 kB' 'Shmem: 8822656 kB' 'KReclaimable: 199276 kB' 'Slab: 550228 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 350952 kB' 'KernelStack: 12288 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10493936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195584 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
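For readers following the trace: get_meminfo (setup/common.sh) walks the chosen meminfo file line by line until the requested key matches, which is why, in the full log, every field ahead of AnonHugePages appears as a compared-and-skipped pair before the match above. A condensed re-sketch of that logic, with the variable names taken from the trace (the exact SPDK source may differ in detail):

  get_meminfo() {
    local get=$1 node=${2:-} var val mem_f mem line
    mem_f=/proc/meminfo
    # With a node argument, the per-NUMA-node file is used instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix on per-node lines
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }  # e.g. "0" for AnonHugePages here
    done
    return 1
  }

Called as anon=$(get_meminfo AnonHugePages), which is how the trace arrives at anon=0.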
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:36.330 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26718184 kB' 'MemAvailable: 30302568 kB' 'Buffers: 2704 kB' 'Cached: 12716520 kB' 'SwapCached: 0 kB' 'Active: 9749824 kB' 'Inactive: 3505876 kB' 'Active(anon): 9359132 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539804 kB' 'Mapped: 216596 kB' 'Shmem: 8822656 kB' 'KReclaimable: 199276 kB' 'Slab: 550316 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351040 kB' 'KernelStack: 12336 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10493952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:36.331 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.331 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:36.331 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:36.331 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
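Both counters the test just derived come straight from the kernel's hugetlb accounting: HugePages_Surp counts pages allocated above nr_hugepages out of the overcommit pool, and HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in; both should be 0 on a freshly configured system like this one. The same numbers are exposed per page size under sysfs, so a quick cross-check (a sketch against the standard kernel interface, not one of these test helpers) would be:

  d=/sys/kernel/mm/hugepages/hugepages-2048kB
  echo "total:    $(cat "$d/nr_hugepages")"       # 1024 in this run
  echo "free:     $(cat "$d/free_hugepages")"     # 1024 -- none faulted in yet
  echo "reserved: $(cat "$d/resv_hugepages")"     # 0
  echo "surplus:  $(cat "$d/surplus_hugepages")"  # 0 -- nothing beyond nr_hugepages

get_meminfo then runs once more for HugePages_Rsvd, scanning a fresh snapshot the same way.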
00:04:36.331 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:36.331 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.332 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:36.332 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:36.332 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:36.332 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.332 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.594 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.594 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.594 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.594 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:36.594 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:36.594 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26718844 kB' 'MemAvailable: 30303228 kB' 'Buffers: 2704 kB' 'Cached: 12716540 kB' 'SwapCached: 0 kB' 'Active: 9749480 kB' 'Inactive: 3505876 kB' 'Active(anon): 9358788 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539444 kB' 'Mapped: 216596 kB' 'Shmem: 8822676 kB' 'KReclaimable: 199276 kB' 'Slab: 550380 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351104 kB' 'KernelStack: 12336 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10493976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:36.596 nr_hugepages=1024
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:36.596 resv_hugepages=0
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:36.596 surplus_hugepages=0
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:36.596 anon_hugepages=0
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.596 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26718368 kB' 'MemAvailable: 30302752 kB' 'Buffers: 2704 kB' 'Cached: 12716560 kB' 'SwapCached: 0 kB' 'Active: 9749516 kB' 'Inactive: 3505876 kB' 'Active(anon): 9358824 kB' 'Inactive(anon): 0 kB'
'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539444 kB' 'Mapped: 216596 kB' 'Shmem: 8822696 kB' 'KReclaimable: 199276 kB' 'Slab: 550380 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351104 kB' 'KernelStack: 12336 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10493996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.597 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.598 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
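The loop being traced above is easier to follow as source than as xtrace. Below is a minimal sketch of a get_meminfo-style helper, reconstructed only from the trace (names and structure inferred, not copied from SPDK's setup/common.sh):

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup, inferred from the xtrace above.
shopt -s extglob

# get_meminfo <Key> [node]: print the value of <Key>, read from
# /proc/meminfo or, when a node index is given, from that node's meminfo.
get_meminfo() {
  local get=$1 node=${2:-} var val _ mem_f mem line
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  # Per-node meminfo files prefix every line with "Node <N> "; strip it.
  mem=("${mem[@]#Node +([0-9]) }")
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

get_meminfo HugePages_Total      # e.g. 1024, matching the trace above
get_meminfo HugePages_Surp 0     # node0 surplus pages, e.g. 0

The per-key [[ ... ]] comparisons and continue statements that dominate the trace are simply this loop running once per meminfo line until the requested key matches.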
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 18515436 kB' 'MemUsed: 6056920 kB' 'SwapCached: 0 kB' 'Active: 3069792 kB' 'Inactive: 73112 kB' 'Active(anon): 2939372 kB' 'Inactive(anon): 0 kB' 'Active(file): 130420 kB' 'Inactive(file): 73112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2827376 kB' 'Mapped: 98772 kB' 'AnonPages: 318652 kB' 'Shmem: 2623844 kB' 'KernelStack: 7048 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56068 kB' 'Slab: 220940 kB' 'SReclaimable: 56068 kB' 'SUnreclaim: 164872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
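The get_nodes trace above leans on two bash idioms worth calling out: an extglob pathname pattern to enumerate NUMA node directories, and ${node##*node} to strip everything up to the last "node" and keep the numeric index. A standalone sketch under those assumptions (the nodes_sys name comes from the trace; the hugepages-2048kB path assumes the 2 MB default page size reported above):

#!/usr/bin/env bash
# Sketch of the get_nodes idiom: enumerate NUMA nodes and key an
# indexed array by node number, reading real counts from sysfs.
shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
  # ${node##*node} removes the longest prefix ending in "node",
  # leaving just the index: .../node0 -> 0, .../node1 -> 1.
  nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"
for i in "${!nodes_sys[@]}"; do
  echo "node$i=${nodes_sys[$i]}"
done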
00:04:36.599 16:03:19 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 scan every node0 meminfo key from MemTotal through HugePages_Free against HugePages_Surp and continue on each non-match]
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:36.601 node0=1024 expecting 1024
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:36.601
00:04:36.601 real	0m2.590s
00:04:36.601 user	0m0.745s
00:04:36.601 sys	0m0.975s
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:36.601 16:03:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:36.601 ************************************
00:04:36.601 END TEST default_setup
00:04:36.601 ************************************
00:04:36.601 16:03:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:36.601 16:03:19 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:36.601 16:03:19 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:36.601 16:03:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:36.601 ************************************
00:04:36.601 START TEST per_node_1G_alloc
00:04:36.601 ************************************
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
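Before the trace continues with hugepages.sh@57 below, it helps to spell out the arithmetic get_test_nr_hugepages is traced performing: a request of 1048576 kB with the 2048 kB hugepage size reported above works out to 512 pages, and the trace assigns that full count to each listed node. A reconstruction under those assumptions (variable names mirror the trace; this is not SPDK's code):

#!/usr/bin/env bash
# Reconstruction of the size -> pages computation suggested by the trace.
size_kb=1048576          # requested hugepage memory per node (1 GiB)
default_hugepages=2048   # kB per hugepage (Hugepagesize in /proc/meminfo)
node_ids=(0 1)

nr_hugepages=$(( size_kb / default_hugepages ))   # 1048576 / 2048 = 512
nodes_test=()
for id in "${node_ids[@]}"; do
  nodes_test[id]=$nr_hugepages
done
echo "nr_hugepages=$nr_hugepages per node: ${nodes_test[*]}"   # 512 512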
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.601 16:03:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:37.985 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:37.985 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:37.985 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:37.985 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:37.985 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:37.985 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:37.985 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:37.985 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:37.985 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:37.985 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:37.985 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:37.985 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:37.985 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:37.985 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:37.985 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:37.985 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:37.985 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
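The NRHUGE=512 and HUGENODE=0,1 assignments above show the hugepage layout being handed to scripts/setup.sh through environment variables rather than flags. A hypothetical manual invocation reproducing this test's configuration might look as follows; exact HUGENODE semantics vary between SPDK revisions, so treat this as a sketch rather than a reference:

# Reserve 512 hugepages on each of NUMA nodes 0 and 1, then let
# setup.sh bind devices (run as root; path as used in this workspace).
sudo NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh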
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.985 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26728656 kB' 'MemAvailable: 30313040 kB' 'Buffers: 2704 kB' 'Cached: 12716636 kB' 'SwapCached: 0 kB' 'Active: 9750656 kB' 'Inactive: 3505876 kB' 'Active(anon): 9359964 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540464 kB' 'Mapped: 216724 kB' 'Shmem: 8822772 kB' 'KReclaimable: 199276 kB' 'Slab: 550452 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351176 kB' 'KernelStack: 12368 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10494056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:37.986 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 scan /proc/meminfo keys against AnonHugePages and continue on each non-match; the capture ends mid-scan here]
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.987 16:03:20 
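The scan above is easier to follow outside xtrace form. Below is a minimal bash sketch of the get_meminfo helper as it can be reconstructed from this trace (setup/common.sh@17-33): the @16 printf feeds the read loop in the real script, rendered here as an equivalent for-loop. This is an illustration of the traced behavior, not a copy of SPDK's setup/common.sh, and the argument handling is an assumption.

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the xtrace above; the real
    # setup/common.sh may differ. extglob is needed for the +([0-9]) pattern.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2    # e.g. get=AnonHugePages, node= (empty in this run)
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # common.sh@23/@25: with node empty the sysfs path degenerates to
        # node/node/meminfo, which never exists, so /proc/meminfo is kept.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"           # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")    # common.sh@29: drop "Node N " prefix

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # common.sh@31
            [[ $var == "$get" ]] || continue         # common.sh@32
            echo "$val"                              # common.sh@33
            return 0
        done
        return 1
    }

    anon=$(get_meminfo AnonHugePages)   # hugepages.sh@97: anon=0 in this run
    echo "anon=$anon"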
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.987 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.988 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.988 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26728656 kB' 'MemAvailable: 30313040 kB' 'Buffers: 2704 kB' 'Cached: 12716636 kB' 'SwapCached: 0 kB' 'Active: 9750648 kB' 'Inactive: 3505876 kB' 'Active(anon): 9359956 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540460 kB' 'Mapped: 216684 kB' 'Shmem: 8822772 kB' 'KReclaimable: 199276 kB' 'Slab: 550444 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351168 kB' 'KernelStack: 12384 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10494072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195568 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:37.988 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.988 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 again walks every meminfo key from the snapshot above with the same read/compare/continue cycle; the identical iterations are elided until HugePages_Surp matches]
00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
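Both lookups so far ran with node= empty, so common.sh@23's test for /sys/devices/system/node/node/meminfo failed and the system-wide /proc/meminfo was read. For a per-node query the same helper would read the sysfs file instead. The snippet below illustrates the two sources; the node0 path is an assumption for illustration and does not come from this log.

    #!/usr/bin/env bash
    # System-wide counters, as consulted by the lookups above:
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    # this run: HugePages_Total 1024, HugePages_Free 1024, Rsvd 0, Surp 0

    # Per-node view (node0 is hypothetical here); each line carries a
    # "Node N " prefix, which is exactly what the ${mem[@]#Node +([0-9]) }
    # expansion at setup/common.sh@29 strips before key matching:
    grep HugePages /sys/devices/system/node/node0/meminfo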
get=HugePages_Rsvd 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26728980 kB' 'MemAvailable: 30313364 kB' 'Buffers: 2704 kB' 'Cached: 12716660 kB' 'SwapCached: 0 kB' 'Active: 9750196 kB' 'Inactive: 3505876 kB' 'Active(anon): 9359504 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539992 kB' 'Mapped: 216608 kB' 'Shmem: 8822796 kB' 'KReclaimable: 199276 kB' 'Slab: 550460 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351184 kB' 'KernelStack: 12368 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10494096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195568 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.990 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.991 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.991 16:03:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the scan steps through the remaining /proc/meminfo keys, PageTables through HugePages_Free, each failing the HugePages_Rsvd match and hitting continue]
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:37.992 nr_hugepages=1024
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:37.992 resv_hugepages=0
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:37.992 surplus_hugepages=0
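The span just traced is setup/common.sh's get_meminfo helper: it reads the meminfo file one "Key: value" record at a time with IFS=': ', hits continue on every key that is not the one requested, and echoes the value once the key matches (HugePages_Rsvd here, giving resv=0). A minimal standalone sketch of that pattern, assuming bash 4+ on Linux; the function name is illustrative, not the verbatim SPDK script:

    get_meminfo_value() {
        local get=$1 var val _
        # Split each "Key:   value kB" record on colon/space and skip
        # (continue) until the requested key comes up.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # requested key not present
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on the machine traced here

The escaped pattern in the trace ([[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]) is just the literal key name with its glob characters defused; quoting $get on the right-hand side achieves the same literal comparison.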
00:04:37.992 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:37.992 anon_hugepages=0
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.993 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26728980 kB' 'MemAvailable: 30313364 kB' 'Buffers: 2704 kB' 'Cached: 12716664 kB' 'SwapCached: 0 kB' 'Active: 9749932 kB' 'Inactive: 3505876 kB' 'Active(anon): 9359240 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539724 kB' 'Mapped: 216608 kB' 'Shmem: 8822800 kB' 'KReclaimable: 199276 kB' 'Slab: 550460 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351184 kB' 'KernelStack: 12368 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10494120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195568 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
[xtrace elided: the scan steps through every key of the snapshot above, MemTotal through Unaccepted, each failing the HugePages_Total match and hitting continue]
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
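get_nodes, traced above, discovers the NUMA topology by globbing sysfs with the extglob pattern node+([0-9]) and records the per-node allocation target in nodes_sys (512 pages per node in this run, the 1024-page total split evenly over no_nodes=2). A sketch under those assumptions, with illustrative names:

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # ${node##*node} leaves the numeric id
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo 'no NUMA nodes found' >&2; exit 1; }
    echo "no_nodes=$no_nodes (ids: ${!nodes_sys[*]})"

nullglob is added here so the loop is skipped outright on a kernel without the sysfs node directories; the trace guards the same situation with its (( no_nodes > 0 )) check.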
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.995 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 19563112 kB' 'MemUsed: 5009244 kB' 'SwapCached: 0 kB' 'Active: 3070040 kB' 'Inactive: 73112 kB' 'Active(anon): 2939620 kB' 'Inactive(anon): 0 kB' 'Active(file): 130420 kB' 'Inactive(file): 73112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2827384 kB' 'Mapped: 98772 kB' 'AnonPages: 318880 kB' 'Shmem: 2623852 kB' 'KernelStack: 7064 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56068 kB' 'Slab: 220952 kB' 'SReclaimable: 56068 kB' 'SUnreclaim: 164884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
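Called with a node argument, as in get_meminfo HugePages_Surp 0 above, the helper swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo. Lines in the per-node file carry a "Node 0 " prefix, which the mem=("${mem[@]#Node +([0-9]) }") expansion strips so the same key scan serves both formats. A standalone sketch of the whole per-node lookup, assuming bash 4+ with extglob; the function name is illustrative:

    shopt -s extglob
    node_meminfo_value() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node stats live in sysfs; fall back to the system-wide file.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # sysfs lines read "Node 0 MemFree: ..."; strip the prefix so both
        # file formats scan identically.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    node_meminfo_value HugePages_Surp 0   # 0 in the node0 snapshot above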
[xtrace elided: the scan steps through every key of the node0 snapshot above, MemTotal through HugePages_Free, each failing the HugePages_Surp match and hitting continue]
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.256 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7165112 kB' 'MemUsed: 12289204 kB' 'SwapCached: 0 kB' 'Active: 6680176 kB' 'Inactive: 3432764 kB' 'Active(anon): 6419904 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432764 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9892048 kB' 'Mapped: 117836 kB' 'AnonPages: 220992 kB' 'Shmem: 6199012 kB' 'KernelStack: 5288 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 143208 kB' 'Slab: 329508 kB' 'SReclaimable: 143208 kB' 'SUnreclaim: 186300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
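With the node1 snapshot read, the rest of the trace (below) repeats the HugePages_Surp scan for node 1 and then does the final bookkeeping: HugePages_Rsvd and each node's surplus (all 0 in this run) are folded into nodes_test, and the observed per-node totals are compared against nodes_sys, producing the "nodeN=512 expecting 512" lines. A compressed sketch of that bookkeeping, with this run's values hard-coded for illustration (they would normally come from the lookups traced above):

    declare -a nodes_sys=([0]=512 [1]=512)    # expected pages per node
    declare -a nodes_test=([0]=512 [1]=512)   # observed HugePages_Total per node
    resv=0                                    # system-wide HugePages_Rsvd
    declare -a surp=([0]=0 [1]=0)             # per-node HugePages_Surp

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))        # fold reserved pages into the total
        (( nodes_test[node] += surp[node] ))
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || exit 1
    done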
-- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
00:04:38.257 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: one IFS=': ' / read -r var val _ / continue triple per remaining /proc/meminfo field, from Inactive(file) through Unaccepted, HugePages_Total and HugePages_Free, until HugePages_Surp matches]
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:38.258 node0=512 expecting 512
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:38.258 node1=512 expecting 512
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:38.258 real 0m1.569s
00:04:38.258 user 0m0.676s
00:04:38.258 sys 0m0.861s
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:38.258 16:03:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:38.258 ************************************
00:04:38.258 END TEST per_node_1G_alloc
00:04:38.258 ************************************
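What the wall of records above traces is a field scanner over /proc/meminfo: setup/common.sh reads every line, skips it unless the variable name matches the one requested, and echoes the value. A minimal sketch of that pattern, reconstructed from the trace (the function body here is a simplified assumption, not the verbatim SPDK helper):

    # Sketch of the get_meminfo-style scan traced above (assumed simplification).
    # Prints the value of one /proc/meminfo field, defaulting to 0 when absent.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # With a node argument, the node-local meminfo is used instead
        # (the trace probes [[ -e /sys/devices/system/node/node$node/meminfo ]]).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every skipped field = one xtrace triple
            echo "${val:-0}"
            return 0
        done <"$mem_f"
        echo 0
    }

    get_meminfo HugePages_Surp   # -> 0 on this box

Each skipped field shows up in the xtrace as one IFS=': ' / read -r var val _ / continue triple, which is why a single lookup produces dozens of records.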
00:04:38.258 16:03:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:38.258 16:03:21 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:38.258 16:03:21 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:38.258 16:03:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:38.258 ************************************
00:04:38.258 START TEST even_2G_alloc
00:04:38.258 ************************************
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
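Before the log resumes with setup.sh output: the parameter block above requests 2097152 kB backed by 2048 kB pages, split evenly over both NUMA nodes (NRHUGE=1024, HUGE_EVEN_ALLOC=yes). The arithmetic, sketched with the trace's own variable names (the loop is reconstructed from hugepages.sh@81-84; the division factor is an assumption based on the two-node result):

    # Sketch of the even per-node split traced above (assumed reconstruction).
    size=2097152                 # requested kB (2 GiB)
    default_hugepages=2048       # kB per page (Hugepagesize)
    nr_hugepages=$((size / default_hugepages))   # 1024 pages total
    _no_nodes=2
    declare -a nodes_test
    while ((_no_nodes > 0)); do
        # walk the nodes from the back, each gets an even share -> 512 apiece
        nodes_test[_no_nodes - 1]=$((nr_hugepages / 2))
        ((_no_nodes--))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512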
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:38.258 16:03:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:39.639 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:39.639 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:39.639 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:39.639 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:39.639 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:39.639 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:39.639 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:39.639 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:39.639 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:39.639 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:39.639 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:39.639 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:39.639 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:39.639 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:39.639 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:39.639 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:39.639 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.639 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26728204 kB' 'MemAvailable: 30312588 kB' 'Buffers: 2704 kB' 'Cached: 12716772 kB' 'SwapCached: 0 kB' 'Active: 9748148 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357456 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537692 kB' 'Mapped: 215844 kB' 'Shmem: 8822908 kB' 'KReclaimable: 199276 kB' 'Slab: 550588 kB' 'SReclaimable: 199276 kB' 'SUnreclaim: 351312 kB' 'KernelStack: 12320 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10480828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: field-by-field scan of the snapshot, MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
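With anon=0 established, the same scan repeats for HugePages_Surp and HugePages_Rsvd (hugepages.sh@99-100 below). The bookkeeping the three lookups feed, sketched under the assumption that the check is the obvious one for a freshly configured pool (all pages still free before anything maps them):

    # Sketch of the verify_nr_hugepages bookkeeping traced here (assumed).
    anon=$(get_meminfo AnonHugePages)    # 0 kB: THP is not inflating the numbers
    surp=$(get_meminfo HugePages_Surp)   # surplus pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # pages reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total) # 1024 in the snapshots above
    free=$(get_meminfo HugePages_Free)   # 1024 as well
    # Nothing has mapped the pool yet, so every configured page should be free:
    ((free == total)) || echo "unexpected: $((total - free)) hugepages in use"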
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-29 -- # [xtrace elided: same locals/mapfile setup as above, with get=HugePages_Surp, node=, mem_f=/proc/meminfo]
00:04:39.640 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26729932 kB' 'MemAvailable: 30314300 kB' 'Buffers: 2704 kB' 'Cached: 12716776 kB' 'SwapCached: 0 kB' 'Active: 9747764 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357072 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537340 kB' 'Mapped: 215824 kB' 'Shmem: 8822912 kB' 'KReclaimable: 199244 kB' 'Slab: 550484 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351240 kB' 'KernelStack: 12320 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10480844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: field-by-field scan of the snapshot, MemTotal through HugePages_Rsvd, until HugePages_Surp matches]
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
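The nodeN=512 comparisons that close each of these tests need per-node counts, which the kernel exposes under sysfs rather than /proc/meminfo. A sketch of reading them (the sysfs paths are the standard kernel layout; the loop itself is illustrative, not the SPDK helper):

    # Sketch: per-node 2 MB hugepage counters behind the node0/node1 checks (assumed).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        nr=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        free=$(<"$node_dir/hugepages/hugepages-2048kB/free_hugepages")
        echo "node$n: nr_hugepages=$nr free_hugepages=$free"
    done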
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-29 -- # [xtrace elided: same locals/mapfile setup as above, with get=HugePages_Rsvd, node=, mem_f=/proc/meminfo]
00:04:39.642 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26731268 kB' 'MemAvailable: 30315636 kB' 'Buffers: 2704 kB' 'Cached: 12716792 kB' 'SwapCached: 0 kB' 'Active: 9747636 kB' 'Inactive: 3505876 kB' 'Active(anon): 9356944 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537220 kB' 'Mapped: 215768 kB' 'Shmem: 8822928 kB' 'KReclaimable: 199244 kB' 'Slab: 550464 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351220 kB' 'KernelStack: 12304 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10480500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: field-by-field scan toward HugePages_Rsvd, still in progress where this excerpt ends]
var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 
16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.644 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- 
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:39.645 nr_hugepages=1024
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.645 resv_hugepages=0
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.645 surplus_hugepages=0
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.645 anon_hugepages=0
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26732600 kB' 'MemAvailable: 30316968 kB' 'Buffers: 2704 kB' 'Cached: 12716812 kB' 'SwapCached: 0 kB' 'Active: 9747604 kB' 'Inactive: 3505876 kB' 'Active(anon): 9356912 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537220 kB' 'Mapped: 215768 kB' 'Shmem: 8822948 kB' 'KReclaimable: 199244 kB' 'Slab: 550464 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351220 kB' 'KernelStack: 12304 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10480524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:39.645 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] || continue -- one compare-and-continue pass per key in the snapshot above, until HugePages_Total is reached
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
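The runs above are setup/common.sh's get_meminfo under xtrace: it snapshots a meminfo file into an array, strips any "Node N " prefix, then reads one 'key: value' pair per iteration, skipping keys with "continue" until the requested key matches and its value is echoed. A minimal sketch of that loop, reconstructed only from the @17-@33 statements visible in this log (the actual common.sh is not part of this excerpt, so treat details as an approximation):

shopt -s extglob   # the +([0-9]) patterns below need extglob

get_meminfo() {
  local get=$1
  local node=$2
  local var val
  local mem_f mem
  mem_f=/proc/meminfo
  # with a node argument, prefer the per-NUMA-node copy under /sys
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  # node files prefix every line with "Node N "; strip it so keys line up
  mem=("${mem[@]#Node +([0-9]) }")
  while IFS=': ' read -r var val _; do
    # this is the long [[ key == \H\u\g\e... ]] / continue run in the trace
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done < <(printf '%s\n' "${mem[@]}")
}

Called as get_meminfo HugePages_Rsvd it scans /proc/meminfo and prints 0; called as get_meminfo HugePages_Surp 0 it scans node0's file, which is what the per-node passes below are doing.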
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.646 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 19578208 kB' 'MemUsed: 4994148 kB' 'SwapCached: 0 kB' 'Active: 3069272 kB' 'Inactive: 73112 kB' 'Active(anon): 2938852 kB' 'Inactive(anon): 0 kB' 'Active(file): 130420 kB' 'Inactive(file): 73112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2827396 kB' 'Mapped: 98772 kB' 'AnonPages: 318184 kB' 'Shmem: 2623864 kB' 'KernelStack: 7064 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56068 kB' 'Slab: 220952 kB' 'SReclaimable: 56068 kB' 'SUnreclaim: 164884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] || continue -- one compare-and-continue pass per key in the node0 snapshot above, until HugePages_Surp is reached
setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.647 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.648 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7158068 kB' 'MemUsed: 12296248 kB' 'SwapCached: 0 kB' 'Active: 6678500 kB' 'Inactive: 3432764 kB' 'Active(anon): 6418228 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432764 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9892168 kB' 'Mapped: 116996 kB' 'AnonPages: 219140 kB' 'Shmem: 6199132 kB' 'KernelStack: 5272 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 143176 kB' 'Slab: 329492 kB' 'SReclaimable: 143176 kB' 'SUnreclaim: 186316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the @31-32 read/compare loop walks this node1 snapshot key by key — MemTotal through HugePages_Free, in snapshot order — each non-matching key taking the continue branch ...]
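What the trace is spelling out entry by entry is setup/common.sh's get_meminfo helper: pick the right meminfo file, strip the sysfs prefix, then compare keys until the requested one turns up. A minimal sketch reconstructed from the xtrace alone — the real SPDK script may differ in details:

    #!/usr/bin/env bash
    shopt -s extglob    # required for the +([0-9]) pattern used below

    # get_meminfo FIELD [NODE] — print FIELD's value, system-wide or per NUMA node.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; use them when a node number was given.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs lines carry a "Node <N> " prefix; strip it so keys match /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the continue branch seen in the trace
            echo "$val"                        # the "echo 0" entries above
            return 0
        done
        return 1
    }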
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:39.650 node0=512 expecting 512
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:39.650 node1=512 expecting 512
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:39.650
00:04:39.650 real	0m1.517s
00:04:39.650 user	0m0.653s
00:04:39.650 sys	0m0.832s
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:39.650 16:03:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:39.650 ************************************
00:04:39.650 END TEST even_2G_alloc
00:04:39.650 ************************************
00:04:39.650 16:03:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:39.650 16:03:22 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:39.650 16:03:22 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:39.650 16:03:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:39.650 ************************************
00:04:39.650 START TEST odd_alloc
00:04:39.650 ************************************
00:04:39.650 16:03:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:04:39.650 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:39.650 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
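The request size is in kB; with the default 2048 kB hugepage it works out to the deliberately odd page count the trace reports next. A quick check, assuming the script rounds up (ceiling division) — the nr_hugepages=1025 that follows is consistent with this:

    size=2098176              # kB, as passed to get_test_nr_hugepages
    default_hugepages=2048    # kB per 2M hugepage
    echo $(( (size + default_hugepages - 1) / default_hugepages ))   # prints 1025

The HUGEMEM=2049 exported a few entries later is the same request expressed in MB (2049 x 1024 = 2098176 kB).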
00:04:39.650 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.910 16:03:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:40.848 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:40.848 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:40.848 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:40.848 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:40.848 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:40.848 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:40.848 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:40.848 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:40.848 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:40.848 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:40.848 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:40.848 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:40.848 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:40.848 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:40.848 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:40.848 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:40.848 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
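The hugepages.sh@81-84 loop earlier in this block is the per-node split: fill from the highest-numbered node down, so the odd page lands on node 0. A sketch consistent with the trace's arithmetic — the ": 513" / ": 1" entries read like xtrace of ":" no-ops used to evaluate the updates, so that interpretation is assumed here:

    _nr_hugepages=1025
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        # give the current highest-numbered node an equal share of what is left
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # xtrace ": 513", then ": 0"
        : $(( --_no_nodes ))                                  # xtrace ": 1",   then ": 0"
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"      # node0=513 node1=512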
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.111 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26730628 kB' 'MemAvailable: 30314996 kB' 'Buffers: 2704 kB' 'Cached: 12716908 kB' 'SwapCached: 0 kB' 'Active: 9748152 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357460 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537680 kB' 'Mapped: 215820 kB' 'Shmem: 8823044 kB' 'KReclaimable: 199244 kB' 'Slab: 550516 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351272 kB' 'KernelStack: 12304 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 10481252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
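Note 'HugePages_Total: 1025' in the snapshot just dumped: setup.sh did reserve the odd page count. The two lookups in this run also show that the same helper serves both scopes; with no node argument the sysfs probe fails ([[ -e .../node/node/meminfo ]] is false) and the global file is read. Roughly, using the get_meminfo sketch above and the values from these snapshots:

    get_meminfo HugePages_Surp 1   # node-scoped: reads node1 sysfs, prints 0
    get_meminfo AnonHugePages      # no node: falls back to /proc/meminfo, prints 0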
[... the @31-32 read/compare loop walks /proc/meminfo key by key — MemTotal through HardwareCorrupted — each non-matching key taking the continue branch until AnonHugePages is reached ...]
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.112 16:03:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26733452 kB' ... (second /proc/meminfo snapshot, taken moments after the one above; identical except MemAvailable: 30317820 kB, Active: 9747500 kB, Active(anon): 9356808 kB, AnonPages: 536968 kB, Mapped: 215808 kB, Slab: 550508 kB, SUnreclaim: 351264 kB, PageTables: 7832 kB, Committed_AS: 10481268 kB, VmallocUsed: 195584 kB; hugepage counters unchanged: 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0')
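The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test a few entries back is a guard on transparent hugepages: on this host THP is set to madvise, so verify_nr_hugepages samples AnonHugePages as a baseline (0 kB here) before reading the hugetlb counters. A sketch of that guard, assuming the standard sysfs knob; what the script ultimately does with anon is not visible in this excerpt:

    # The mode line reads e.g. "always [madvise] never".
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP could hand out anonymous huge pages, so record the current amount
        # (get_meminfo as sketched earlier; prints kB, 0 on this host).
        anon=$(get_meminfo AnonHugePages)
    fi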
[... the @31-32 read/compare loop walks the second snapshot key by key — MemTotal through HugePages_Rsvd, in /proc/meminfo order — each non-matching key taking the continue branch; the excerpt ends here, mid-scan ...]
00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26734360 kB' 'MemAvailable: 30318728 kB' 'Buffers: 2704 kB' 'Cached: 12716928 kB' 'SwapCached: 0 kB' 'Active: 9747668 kB' 'Inactive: 3505876 kB' 'Active(anon): 9356976 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537112 kB' 'Mapped: 215808 kB' 'Shmem: 8823064 kB' 'KReclaimable: 199244 kB' 'Slab: 550524 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351280 kB' 'KernelStack: 12320 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 10481288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195584 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
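[Editor's note on the mechanics traced above and below.] At this point the trace has matched HugePages_Surp and returned 0 (surp=0), and setup/hugepages.sh@100 starts a second full pass of get_meminfo, this time for HugePages_Rsvd. The mechanics are all visible in the trace: common.sh mapfiles the whole meminfo file into an array, strips any "Node <N> " prefix (a no-op for /proc/meminfo, needed for the per-node sysfs files), then reads each entry with IFS=': ' and hits "continue" for every key until the requested one matches. The backslash-riddled \H\u\g\e\P\a\g\e\s\_\R\s\v\d on the right of each [[ ... == ... ]] is only bash xtrace escaping the pattern character by character; the comparison is against the plain key name. A minimal, freely restructured sketch of that lookup (get_meminfo and the variable names get, node, mem_f, mem, var, val are taken from the trace; the standalone packaging is an assumption, not SPDK's literal code):

shopt -s extglob    # required by the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-} var val _ line mem
    local mem_f=/proc/meminfo
    # With an empty node this probes ".../node/node/meminfo", which does not
    # exist, so the global file is kept: exactly what the trace shows.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node <N> " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # each non-matching key is one "continue" iteration in the trace
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

Called as get_meminfo HugePages_Rsvd on this machine it prints 0, which is the "# echo 0" / "# return 0" pair that ends the scan below.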
-- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.114 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 
16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:41.115 nr_hugepages=1025 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.115 resv_hugepages=0 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.115 surplus_hugepages=0 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.115 anon_hugepages=0 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.115 16:03:24 
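[Editor's note.] The bare nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 lines in the middle of this span are the script's own stdout interleaved with the xtrace: each get_meminfo result is captured by its caller through command substitution, which is why every lookup ends in "# echo <value>" / "# return 0" rather than setting a global. With surp and resv both 0, the check traced at setup/hugepages.sh@107 asserts the accounting identity the odd_alloc test is built around. A loose, hedged restatement (total is an assumed local name; the helper is the sketch above, not the script's exact code):

# HugePages_Total reported by the kernel must equal the requested count
# plus any surplus and reserved pages: 1025 == 1025 + 0 + 0 in this run.
nr_hugepages=1025
surp=$(get_meminfo HugePages_Surp)      # 0 in the trace
resv=$(get_meminfo HugePages_Rsvd)      # 0 in the trace
total=$(get_meminfo HugePages_Total)    # 1025 in the trace
(( total == nr_hugepages + surp + resv )) || exit 1

The scan that resumes below is exactly that HugePages_Total lookup, walking the same meminfo dump a third time until the key matches and "# echo 1025" hands the value back.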
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.115 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26734360 kB' 'MemAvailable: 30318728 kB' 'Buffers: 2704 kB' 'Cached: 12716948 kB' 'SwapCached: 0 kB' 'Active: 9747692 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357000 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537116 kB' 'Mapped: 215808 kB' 'Shmem: 8823084 kB' 'KReclaimable: 199244 kB' 'Slab: 550524 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351280 kB' 'KernelStack: 12320 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 10481312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.116 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 19586708 kB' 'MemUsed: 4985648 kB' 'SwapCached: 0 kB' 'Active: 3069136 kB' 'Inactive: 73112 kB' 'Active(anon): 2938716 kB' 'Inactive(anon): 0 kB' 'Active(file): 130420 kB' 'Inactive(file): 73112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2827408 kB' 'Mapped: 98772 kB' 'AnonPages: 318000 kB' 'Shmem: 2623876 kB' 'KernelStack: 7048 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56068 kB' 'Slab: 220980 kB' 'SReclaimable: 56068 kB' 'SUnreclaim: 164912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- 
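[Editor's note.] With HugePages_Total confirmed as 1025, the check at hugepages.sh@110 passes and get_nodes walks /sys/devices/system/node/node+([0-9]), recording the expected count per NUMA node: 512 for node0 and 513 for node1. That split is the point of the odd_alloc test: 1025 is deliberately odd, so it cannot divide evenly across two nodes, and the kernel parks the extra page on one of them. An illustrative reconstruction of that bookkeeping (nodes_sys, no_nodes, node and the glob appear in the trace; the even-split-plus-remainder arithmetic and the nodes array name are assumptions):

shopt -s extglob nullglob
nr_hugepages=1025
nodes=(/sys/devices/system/node/node+([0-9]))   # node0 node1 on this box
no_nodes=${#nodes[@]}
declare -a nodes_sys
if (( no_nodes > 0 )); then
    for node in "${nodes[@]}"; do
        nodes_sys[${node##*node}]=$(( nr_hugepages / no_nodes ))
    done
    # the odd page left over lands on the last node: 1025 = 512 + 513
    (( nodes_sys[no_nodes - 1] += nr_hugepages % no_nodes ))
fi

Each expectation is then verified against the node-local counters, which is why the scan that follows re-enters get_meminfo as "get_meminfo HugePages_Surp 0": mem_f now switches to /sys/devices/system/node/node0/meminfo, and that dump indeed reports HugePages_Total: 512 for node0.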
setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.117 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 
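Editor's note: the @17-@33 trace above is setup/common.sh's get_meminfo pulling a single field (HugePages_Surp) out of node0's meminfo. Per-node meminfo files under sysfs prefix every line with "Node N ", which is why the @29 expansion strips that prefix before the IFS=': ' read splits each line into "field: value". A minimal stand-alone sketch of the same parsing pattern; get_field and everything below is our illustration, not the SPDK script verbatim:

    #!/usr/bin/env bash
    shopt -s extglob                        # the +([0-9]) pattern below needs this

    # get_field FIELD [NODE]: print one meminfo value, node-scoped if asked.
    get_field() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node meminfo lives in sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it (@29 above).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "Field:   value kB"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        return 1
    }

    get_field HugePages_Surp 0    # prints node0's surplus count (0 in this run)

The linear scan explains the long per-field trace: every call walks the file from MemTotal down until the requested field matches.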
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7146408 kB' 'MemUsed: 12307908 kB' 'SwapCached: 0 kB' 'Active: 6680124 kB' 'Inactive: 3432764 kB' 'Active(anon): 6419852 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432764 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9892284 kB' 'Mapped: 117472 kB' 'AnonPages: 220688 kB' 'Shmem: 6199248 kB' 'KernelStack: 5272 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 143176 kB' 'Slab: 329544 kB' 'SReclaimable: 143176 kB' 'SUnreclaim: 186368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.377 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.378 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
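Editor's note on the check that closes this test just below: the echoes look crossed (node0=512 "expecting 513", node1=513 "expecting 512"), yet the @130 comparison passes. That is deliberate: sorted_t and sorted_s use each per-node count as a sparse-array index, and bash lists the indices of an indexed array in ascending order, so only the sorted set of counts has to agree, not which node holds the odd page. A sketch of that trick, with this run's values:

    # Values from this run; the crossed assignment is intentional.
    nodes_test=(512 513)     # counts the test handed out per node
    nodes_sys=(513 512)      # counts the kernel reports per node
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        # Using a count as a sparse-array INDEX makes ${!arr[*]} come back
        # in ascending numeric order, i.e. already sorted.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    # Same shape as the @130 check below: "512 513" == "512 513".
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] &&
        echo "per-node sets match: ${!sorted_t[*]}"

One caveat of the index trick: duplicate counts collapse to a single key, so it compares sets rather than true multisets.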
00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:41.379 node0=512 expecting 513
00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:41.379 node1=513 expecting 512
00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:41.379
00:04:41.379 real 0m1.505s
00:04:41.379 user 0m0.630s
00:04:41.379 sys 0m0.845s
16:03:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:41.379 16:03:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:41.379 ************************************
00:04:41.379 END TEST odd_alloc
00:04:41.379 ************************************
00:04:41.379 16:03:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:41.379 16:03:24 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:41.379 16:03:24 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:41.379 16:03:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:41.379 ************************************
00:04:41.379 START TEST custom_alloc
00:04:41.379 ************************************
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
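Editor's note: the custom_alloc setup traced above and below converts byte sizes in kB into 2048 kB hugepage counts (1048576 kB -> 512 pages, 2097152 kB -> 1024 pages). get_test_nr_hugepages_per_node first splits a count evenly across the two nodes (the 256/256 pass at @81-@84 below), then, once nodes_hp carries explicit per-node targets, copies those instead and flattens them into the HUGENODE string handed to scripts/setup.sh. A condensed sketch of the arithmetic and the string building; the variable names come from the trace, the glue code is ours:

    default_hugepages=2048                    # kB per hugepage (2 MiB)
    to_pages() { echo $(( $1 / default_hugepages )); }

    nodes_hp=( "$(to_pages 1048576)" "$(to_pages 2097152)" )   # (512 1024)

    # Flatten the per-node targets into the HUGENODE string (@181-@187):
    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))   # running total -> 1536
    done
    IFS=,                                     # mirrors the test's local IFS=, (@167)
    echo "HUGENODE='${HUGENODE[*]}' total=${_nr_hugepages}"
    # -> HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' total=1536

The comma join via "${HUGENODE[*]}" under IFS=, is what produces the single @187 assignment visible further down.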
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181
-- # for node in "${!nodes_hp[@]}" 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.379 16:03:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.312 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.312 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:42.312 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.312 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.312 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.312 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:42.312 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.312 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.313 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.313 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:42.313 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:42.313 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:42.313 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:42.313 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:42.313 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:42.313 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:42.313 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25670416 kB' 'MemAvailable: 29254784 kB' 'Buffers: 2704 kB' 'Cached: 12717032 kB' 'SwapCached: 0 kB' 'Active: 9748228 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357536 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537692 kB' 'Mapped: 216132 kB' 'Shmem: 8823168 kB' 'KReclaimable: 199244 kB' 'Slab: 550364 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351120 kB' 'KernelStack: 12288 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10481512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
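Editor's note: verify_nr_hugepages begins by checking the bracketed mode in the kernel's transparent-hugepage switch; the @96 test above compares "always [madvise] never" against *\[\n\e\v\e\r\]*, so THP is not fully off on this host, and only then samples AnonHugePages. With no node argument, @23 probes the nonexistent path node/node/meminfo, @25 sees an empty string, and get_meminfo falls back to /proc/meminfo. Note HugePages_Total: 1536 in the dump above, matching nodes_hp[0]+nodes_hp[1] = 512+1024. A sketch of the THP gate, our reconstruction of the logic rather than the script verbatim:

    thp=/sys/kernel/mm/transparent_hugepage/enabled
    # $(<"$thp") reads e.g. "always [madvise] never"; the bracketed word is
    # the active mode, so matching *[never]* would mean THP is fully off.
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        # THP can mint anonymous hugepages behind the test's back, so take
        # an AnonHugePages baseline (0 kB in the dump above) first.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "THP active; AnonHugePages baseline: ${anon:-0} kB"
    fi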
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.576 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[repetitive xtrace condensed: the @31/@32 scan walks MemFree, MemAvailable, Buffers, Cached and every further /proc/meminfo field through HardwareCorrupted, failing the comparison each time]
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- 
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.577 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25679956 kB' 'MemAvailable: 29264324 kB' 'Buffers: 2704 kB' 'Cached: 12717032 kB' 'SwapCached: 0 kB' 'Active: 9747976 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357284 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537368 kB' 'Mapped: 215916 kB' 'Shmem: 8823168 kB' 'KReclaimable: 199244 kB' 'Slab: 550276 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351032 kB' 'KernelStack: 12336 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10481528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195584 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
[... each key from MemTotal through HugePages_Rsvd compared against HugePages_Surp in turn; every one fails the match and hits 'continue' ...]
00:04:42.579 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.579 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.579 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:42.579 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:42.579 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... get_meminfo setup trace (local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile, Node-prefix strip) identical in shape to the HugePages_Surp call above ...]
00:04:42.579 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25681092 kB' 'MemAvailable: 29265460 kB' 'Buffers: 2704 kB' 'Cached: 12717056 kB' 'SwapCached: 0 kB' 'Active: 9747912 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357220 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537248 kB' 'Mapped: 215792 kB' 'Shmem: 8823192 kB' 'KReclaimable: 199244 kB' 'Slab: 550304 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351060 kB' 'KernelStack: 12336 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10481548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195584 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
[... each key before HugePages_Rsvd compared against HugePages_Rsvd in turn; every one fails the match and hits 'continue' ...]
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
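For reference, the get_meminfo calls traced in this section all follow the same pattern: slurp /proc/meminfo (or a per-node sysfs meminfo file when a NUMA node is given), split each line on ': ', and scan for the requested key, which is why every non-matching key produces one 'continue' in the set -x output. Below is a minimal sketch of that technique, reconstructed from the trace rather than copied from SPDK's setup/common.sh, so names and control flow are assumptions and the upstream script may differ:

    #!/usr/bin/env bash
    # Sketch only: reconstructed from the set -x trace above, not verbatim SPDK code.
    shopt -s extglob # required for the +([0-9]) pattern used in the prefix strip

    get_meminfo() {
        local get=$1     # key to look up, e.g. HugePages_Surp
        local node=${2-} # optional NUMA node; empty selects system-wide stats
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; with node empty the trace shows the
        # probe of the literal path node/node/meminfo failing, so we fall back.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local IFS=': '
        local line
        for line in "${mem[@]}"; do
            # Split "Key: value kB" into var=Key, val=value.
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # the long runs of 'continue' above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp # prints 0 for the snapshots captured in this run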
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:42.581 nr_hugepages=1536
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.581 resv_hugepages=0
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.581 surplus_hugepages=0
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.581 anon_hugepages=0
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... get_meminfo setup trace (local get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile, Node-prefix strip) identical in shape to the calls above ...]
00:04:42.581 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 25680088 kB' 'MemAvailable: 29264456 kB' 'Buffers: 2704 kB' 'Cached: 12717076 kB' 'SwapCached: 0 kB' 'Active: 9748272 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357580 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537632 kB' 'Mapped: 215792 kB' 'Shmem: 8823212 kB' 'KReclaimable: 199244 kB' 'Slab: 550304 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351060 kB' 'KernelStack: 12352 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 10481572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195568 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
[... scan of the keys against HugePages_Total in progress: MemTotal onward each fail the match and hit 'continue'; the trace continues at 00:04:42.583 with the FileHugePages check ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.583 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 19594252 kB' 'MemUsed: 4978104 kB' 'SwapCached: 0 kB' 'Active: 3069824 kB' 'Inactive: 73112 kB' 'Active(anon): 2939404 kB' 'Inactive(anon): 0 kB' 'Active(file): 130420 kB' 'Inactive(file): 73112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2827492 kB' 'Mapped: 98772 kB' 'AnonPages: 318600 kB' 'Shmem: 2623960 kB' 'KernelStack: 7016 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56068 kB' 'Slab: 220972 kB' 'SReclaimable: 56068 kB' 'SUnreclaim: 164904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.845 16:03:25 
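The dump above is what setup/common.sh's get_meminfo is about to scan: it mapfiles the per-node meminfo, strips the "Node N " prefix, then read-loops over "Field: value" pairs until the requested key matches and echoes the value. A minimal sketch of that technique (hypothetical helper name; the real script structures the loop slightly differently, but the prefix strip and the IFS=': ' split are exactly what the xtrace shows):

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " prefix strip below is an extended glob

    # Hypothetical stand-in for setup/common.sh's get_meminfo: print the value
    # of one meminfo field, system-wide or for one NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs and prefix every line with "Node N ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")             # drop the "Node N " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Field:   value kB"
            if [[ $var == "$get" ]]; then
                echo "$val"                          # e.g. 0 for HugePages_Surp
                return 0
            fi
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0    # prints 0 for the node0 dump above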
[xtrace condensed: the same read/continue scan walks the node0 dump field by field until var matches HugePages_Surp]
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 6083356 kB' 'MemUsed: 13370960 kB' 'SwapCached: 0 kB' 'Active: 6678028 kB' 'Inactive: 3432764 kB' 'Active(anon): 6417756 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432764 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9892308 kB' 'Mapped: 117020 kB' 'AnonPages: 218568 kB' 'Shmem: 6199272 kB' 'KernelStack: 5288 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 143176 kB' 'Slab: 329324 kB' 'SReclaimable: 143176 kB' 'SUnreclaim: 186148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:42.845 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same field-by-field scan runs over the node1 dump until var matches HugePages_Surp]
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:42.846
00:04:42.846 real    0m1.436s
00:04:42.846 user    0m0.609s
00:04:42.846 sys     0m0.800s
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:42.846 16:03:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:42.846 ************************************
00:04:42.846 END TEST custom_alloc
00:04:42.846 ************************************
00:04:42.846 16:03:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:42.846 16:03:25 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:42.846 16:03:25 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:42.846 16:03:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:42.846 ************************************
00:04:42.846 START TEST no_shrink_alloc
00:04:42.846 ************************************
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
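Before setup runs, it is worth checking the arithmetic the trace just performed: get_test_nr_hugepages was handed size=2097152 (kB) and, at the default 2048 kB hugepage size, that is exactly the nr_hugepages=1024 it pins to node 0; custom_alloc passed the same kind of comparison above with a 512/1024 split. A quick sanity check using only values visible in this log:

    # nr_hugepages as derived in the trace: requested kB / hugepage size in kB.
    size_kb=2097152
    hugepagesize_kb=2048                    # "Hugepagesize: 2048 kB" in /proc/meminfo
    echo $(( size_kb / hugepagesize_kb ))   # 1024, matching nr_hugepages=1024

    # custom_alloc's final comparison, reduced to its essence: the observed
    # per-node counts, joined with a comma, must equal the expected split.
    expected=512,1024
    observed=512,1024                       # from "node0=512 ..." and "node1=1024 ..."
    [[ $observed == "$expected" ]] && echo "per-node hugepage split OK"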
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.846 16:03:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:44.226 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.226 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:44.226 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.226 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.226 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.226 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:44.226 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.226 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.226 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:44.226 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.226 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.226 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.226 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.226 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:44.226 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.226 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.226 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
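Every device in the list was already bound to vfio-pci, so setup.sh left them alone. That status check amounts to resolving each device's sysfs driver symlink; a hand-rolled version of the same probe (sample addresses copied from the list above, not the script's exact code):

    # Print the kernel driver each PCI function is currently bound to by
    # resolving /sys/bus/pci/devices/<BDF>/driver (a symlink into .../drivers/).
    for bdf in 0000:00:04.0 0000:82:00.0; do
        link=/sys/bus/pci/devices/$bdf/driver
        if [[ -e $link ]]; then
            echo "$bdf: $(basename "$(readlink -f "$link")")"   # e.g. vfio-pci
        else
            echo "$bdf: not bound to any driver"
        fi
    done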
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:44.226 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.227 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26681708 kB' 'MemAvailable: 30266076 kB' 'Buffers: 2704 kB' 'Cached: 12717168 kB' 'SwapCached: 0 kB' 'Active: 9747860 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357168 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537104 kB' 'Mapped: 215856 kB' 'Shmem: 8823304 kB' 'KReclaimable: 199244 kB' 'Slab: 550360 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351116 kB' 'KernelStack: 12336 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10481772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
[xtrace condensed: the read/continue scan for AnonHugePages is in progress here and the trace continues below]
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.228 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26680712 kB' 'MemAvailable: 30265080 kB' 'Buffers: 2704 kB' 'Cached: 12717172 kB' 'SwapCached: 0 kB' 'Active: 9748656 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357964 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537916 kB' 'Mapped: 215892 kB' 'Shmem: 8823308 kB' 'KReclaimable: 199244 kB' 'Slab: 550356 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351112 kB' 'KernelStack: 12384 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10481792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 
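What the trace above is exercising is setup/common.sh's get_meminfo helper: the meminfo source is slurped with mapfile, then an IFS=': ' read splits each "Key: value kB" line, and the loop continues until the requested key matches, at which point the value is echoed (0 for AnonHugePages here). The sketch below is a minimal re-creation of that lookup pattern, assuming a direct redirect from /proc/meminfo rather than SPDK's exact mapfile plumbing; the function name is illustrative, not SPDK's.

    #!/usr/bin/env bash
    # Sketch of the key lookup traced above: split each /proc/meminfo line
    # on ':' and blanks, stop at the first key that matches the request.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # numeric field; the trailing "kB" lands in $_
                return 0
            fi
        done < /proc/meminfo
        return 1              # key not present in this kernel's meminfo
    }

    # e.g. anon=$(get_meminfo_sketch AnonHugePages)   -> 0 on this machine

One quirk is visible at common.sh@23: with node unset, the probed path degenerates to /sys/devices/system/node/node/meminfo, which never exists, so the helper falls back to the system-wide /proc/meminfo.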
[trace condensed: setup/common.sh@31-32 scans the snapshot above key by key; MemTotal through HugePages_Rsvd all fail [[ $var == HugePages_Surp ]] and hit continue]
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.230 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26680140 kB' 'MemAvailable: 30264508 kB' 'Buffers: 2704 kB' 'Cached: 12717188 kB' 'SwapCached: 0 kB' 'Active: 9748892 kB' 'Inactive: 3505876 kB' 'Active(anon): 9358200 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538148 kB' 'Mapped: 215892 kB' 'Shmem: 8823324 kB' 'KReclaimable: 199244 kB' 'Slab: 550356 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351112 kB' 'KernelStack: 12352 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10483180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
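The local node= and the mem=("${mem[@]#Node +([0-9]) }") expansion that appear in every call exist because the same helper can also read a per-node snapshot from sysfs, where each line carries a "Node <id> " prefix. Below is a hedged sketch of that per-node variant, assuming a NUMA node id as the first argument; the function name and file handling are illustrative, not SPDK's exact code.

    shopt -s extglob   # the +([0-9]) pattern below requires extended globbing
    # Look up one key in /sys/devices/system/node/node<N>/meminfo, whose
    # lines read e.g. "Node 0 HugePages_Total:  1024".
    node_meminfo_sketch() {
        local node=$1 get=$2 line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }   # drop the "Node <id> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    # e.g. node_meminfo_sketch 0 HugePages_Free   (assuming node0 exists)

Stripping the prefix up front is what lets one IFS=': ' parse serve both the per-node file and the whole-system /proc/meminfo.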
[trace condensed: setup/common.sh@31-32 scans the snapshot above once more; MemTotal through HugePages_Free all fail [[ $var == HugePages_Rsvd ]] and hit continue]
00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:44.232 nr_hugepages=1024
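With anon, surp, and resv all collected, hugepages.sh prints its bookkeeping and asserts that the pool survived the allocation intact: since the test configured 1024 pages, the @107 check below reduces to surp + resv == 0, and @109 to the count itself being unchanged. A rough bash rendering of that accounting follows; variable sourcing is assumed for illustration, not copied from hugepages.sh.

    # Values as they stand at this point in the trace.
    nr_hugepages=1024   # pool size configured by the test
    anon=0              # AnonHugePages   (get_meminfo call 1)
    surp=0              # HugePages_Surp  (get_meminfo call 2)
    resv=0              # HugePages_Rsvd  (get_meminfo call 3)

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Mirrors the shape of the @107/@109 assertions: no surplus or reserved
    # pages may inflate the pool, and the configured count must be unchanged.
    (( nr_hugepages == nr_hugepages + surp + resv )) || echo "surp/resv leak" >&2
    (( nr_hugepages == 1024 ))                       || echo "pool shrank" >&2

The HugePages_Total lookup that starts next closes the loop by checking the kernel's own count against the same 1024.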
nr_hugepages=1024 00:04:44.232 nr_hugepages=1024 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.232 resv_hugepages=0 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.232 surplus_hugepages=0 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.232 anon_hugepages=0 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26679980 kB' 'MemAvailable: 30264348 kB' 'Buffers: 2704 kB' 'Cached: 12717212 kB' 'SwapCached: 0 kB' 'Active: 9748480 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357788 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537728 kB' 'Mapped: 215816 kB' 'Shmem: 8823348 kB' 'KReclaimable: 199244 kB' 'Slab: 550344 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351100 kB' 'KernelStack: 12432 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10482836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.232 16:03:27 setup.sh.hugepages.no_shrink_alloc -- 
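
What the trace above is exercising: hugepages.sh has just confirmed resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, checked (( 1024 == nr_hugepages + surp + resv )), and now re-reads HugePages_Total through setup/common.sh's get_meminfo, which snapshots meminfo with mapfile and scans each "field: value" record with IFS=': ' read until the requested name matches. The backslash-riddled patterns such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are simply bash xtrace quoting every character of a literal comparison string. A minimal sketch of the same lookup, assuming a plain /proc/meminfo (the real helper also handles per-node files; the function name here is hypothetical):

    # Sketch only: simplified get_meminfo-style lookup, reading the file directly.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # match the requested field name literally, echo its value
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Total   # -> 1024 on this runner
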
setup/common.sh@32 -- # continue 00:04:44.232 16:03:27 [xtrace scan condensed: MemFree through Unaccepted each compared against HugePages_Total and skipped with continue] 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:44.234
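
get_nodes, just traced, discovers NUMA nodes with the extglob pattern /sys/devices/system/node/node+([0-9]) and keys nodes_sys[] by the numeric suffix via ${node##*node}; on this runner node0 carries all 1024 hugepages and node1 none. A sketch of the same enumeration, with the counts read from the standard per-node sysfs files (an assumption; the traced script assigns values it gathered earlier):

    # Sketch only: enumerate NUMA nodes and read 2 MB hugepage counts per node.
    shopt -s extglob                     # needed for the +([0-9]) pattern
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} keeps only the trailing index, e.g. .../node0 -> 0
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    declare -p nodes_sys                 # e.g. declare -a nodes_sys=([0]="1024" [1]="0")
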
16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 18533448 kB' 'MemUsed: 6038908 kB' 'SwapCached: 0 kB' 'Active: 3070640 kB' 'Inactive: 73112 kB' 'Active(anon): 2940220 kB' 'Inactive(anon): 0 kB' 'Active(file): 130420 kB' 'Inactive(file): 73112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2827560 kB' 'Mapped: 98772 kB' 'AnonPages: 319324 kB' 'Shmem: 2624028 kB' 'KernelStack: 7384 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56068 kB' 'Slab: 221060 kB' 'SReclaimable: 56068 kB' 'SUnreclaim: 164992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.234 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.234 16:03:27 [xtrace scan condensed: SwapCached through HugePages_Free in node0's meminfo each compared against HugePages_Surp and skipped with continue] 00:04:44.235 16:03:27
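
For this per-node pass, get_meminfo ran with node=0, so common.sh@23-@24 swapped mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo; lines there carry a "Node 0 " prefix, which the expansion ${mem[@]#Node +([0-9]) } strips before the same field scan runs. A condensed sketch of that read path, under the same extglob assumption:

    # Sketch only: per-node meminfo read as traced at common.sh@22-@29.
    shopt -s extglob
    node=0 mem_f=/proc/meminfo
    # prefer the per-node sysfs view when it exists
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'
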
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:44.235 node0=1024 expecting 1024 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.235 16:03:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:45.615 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.615 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:45.615 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.615 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.615 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.615 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.615 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.615 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.615 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.615 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:45.615 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:45.615 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:45.615 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:45.615 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:45.615 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:45.615 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:45.615 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:45.615 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:45.615 16:03:28 
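
The INFO line above is the point of the no_shrink_alloc case: setup output re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no, and since node0 already holds 1024 pages the script keeps the larger pool rather than shrinking it; verify_nr_hugepages then re-walks the counters to confirm nothing was released. For illustration only, the standard kernel knob such a shrink would touch (not SPDK's script itself):

    # Illustration only: the per-size hugepage pool knob (standard kernel sysfs path).
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # -> 1024 here
    # Writing 512 to that same file would shrink the pool; the test expects
    # setup.sh to leave the existing 1024 pages in place instead.
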
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26676688 kB' 'MemAvailable: 30261056 kB' 'Buffers: 2704 kB' 'Cached: 12717276 kB' 'SwapCached: 0 kB' 'Active: 9748508 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357816 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537604 kB' 'Mapped: 215964 kB' 'Shmem: 8823412 kB' 'KReclaimable: 199244 kB' 'Slab: 550364 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351120 kB' 'KernelStack: 12336 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10481880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.615 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- 
00:04:45.616 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [repetitive xtrace collapsed: each remaining /proc/meminfo key (Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) compared against AnonHugePages and skipped with 'continue']
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
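For readability, here is a minimal sketch of the lookup the xtrace above keeps repeating: get_meminfo in setup/common.sh walks a meminfo file with an IFS=': ' read loop and echoes the value of the first key that matches. This is a simplified reconstruction from the trace, not the verbatim SPDK source; the per-node "Node <N>" prefix stripping (common.sh@29) is left out here and shown separately further down.

    #!/usr/bin/env bash
    # Hedged reconstruction of the get_meminfo pattern seen in the trace.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # Per-node lookups read the sysfs copy instead, when one exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Split "Key: value kB" into var=Key, val=value, discarding the unit.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    surp=$(get_meminfo HugePages_Surp)   # -> 0 in the run traced here

The xtrace noise comes from exactly this loop: under set -x every read and every failed comparison emits one line, so a single lookup prints the whole meminfo key list.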
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.617 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26677276 kB' 'MemAvailable: 30261644 kB' 'Buffers: 2704 kB' 'Cached: 12717280 kB' 'SwapCached: 0 kB' 'Active: 9748832 kB' 'Inactive: 3505876 kB' 'Active(anon): 9358140 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538032 kB' 'Mapped: 215900 kB' 'Shmem: 8823416 kB' 'KReclaimable: 199244 kB' 'Slab: 550364 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351120 kB' 'KernelStack: 12400 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10484808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:45.618 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [repetitive xtrace collapsed: every key from MemTotal through HugePages_Rsvd compared against HugePages_Surp and skipped with 'continue']
00:04:45.618 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.618 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.618 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:45.618 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
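The mem=("${mem[@]#Node +([0-9]) }") step traced at setup/common.sh@29 deserves a note: per-node meminfo files under sysfs prefix every line with "Node <N> " (for example "Node 0 MemTotal: 22013336 kB"), and the extglob pattern strips that prefix so the same "Key: value" parser works for both the system-wide and the per-node file. A small hedged demo, with node0 chosen arbitrarily as an example path:

    #!/usr/bin/env bash
    # Demo of the extglob prefix strip from setup/common.sh@29 (reconstruction).
    shopt -s extglob                                   # enables the +([0-9]) pattern
    node_f=/sys/devices/system/node/node0/meminfo      # example: node 0
    if [[ -e $node_f ]]; then
        mapfile -t mem < "$node_f"                     # one array element per line
        mem=("${mem[@]#Node +([0-9]) }")               # drop leading "Node <N> "
        printf '%s\n' "${mem[@]:0:3}"                  # now plain "Key: value" lines
    fi

In the run traced here node= is empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the function falls back to /proc/meminfo; the strip is then a harmless no-op.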
00:04:45.619 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.619 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.619 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:45.619 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.619 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.619 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.619 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 26677188 kB' 'MemAvailable: 30261556 kB' 'Buffers: 2704 kB' 'Cached: 12717296 kB' 'SwapCached: 0 kB' 'Active: 9748548 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357856 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537696 kB' 'Mapped: 215892 kB' 'Shmem: 8823432 kB' 'KReclaimable: 199244 kB' 'Slab: 550360 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351116 kB' 'KernelStack: 12336 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10481920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB'
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [repetitive xtrace collapsed: every key from MemTotal through HugePages_Free compared against HugePages_Rsvd and skipped with 'continue']
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:45.905 nr_hugepages=1024
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.905 resv_hugepages=0
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.905 surplus_hugepages=0
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.905 anon_hugepages=0
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.905 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
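The arithmetic checks traced at setup/hugepages.sh@107-@109 assert that the configured pool is fully visible to the kernel: the reported total must equal the requested nr_hugepages plus surplus and reserved pages, which all read 0 in this run. A hedged reconstruction of that accounting, reusing the get_meminfo sketch above (variable names are ours, not necessarily the script's):

    # Hedged reconstruction of the hugepages.sh@97-@109 accounting, assuming
    # the get_meminfo sketch shown earlier is in scope.
    nr_hugepages=1024                       # pool size the test configured
    anon=$(get_meminfo AnonHugePages)       # 0 in the trace above
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1024
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

Because (( 1024 == nr_hugepages )) also holds, the no_shrink_alloc test proceeds to re-read HugePages_Total below and verify the pool was not shrunk.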
'Cached: 12717300 kB' 'SwapCached: 0 kB' 'Active: 9748092 kB' 'Inactive: 3505876 kB' 'Active(anon): 9357400 kB' 'Inactive(anon): 0 kB' 'Active(file): 390692 kB' 'Inactive(file): 3505876 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537136 kB' 'Mapped: 215824 kB' 'Shmem: 8823436 kB' 'KReclaimable: 199244 kB' 'Slab: 550400 kB' 'SReclaimable: 199244 kB' 'SUnreclaim: 351156 kB' 'KernelStack: 12336 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 10481944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 17141760 kB' 'DirectMap1G: 33554432 kB' 00:04:45.906
[xtrace condensed: the IFS=': ' / read -r var val _ / [[ $var == HugePages_Total ]] / continue cycle repeats for every key in the snapshot above, from MemTotal through Unaccepted, with no match]
00:04:45.907 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.907 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:45.907 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.907 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.907
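For readers following the trace: the get_meminfo calls above all run the same small parser from setup/common.sh -- snapshot the relevant meminfo file, strip any per-node prefix, then scan it key by key. A minimal standalone sketch of that idea (simplified and renamed by the editor; not the verbatim SPDK helper):

shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo,
# or from NODE's meminfo under /sys when a node number is given.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"   # key, value, unit
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total     # -> 1024 on this host
get_meminfo HugePages_Surp 0    # -> 0 (surplus pages on node0)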
16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.908 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 18525660 kB' 'MemUsed: 6046696 kB' 'SwapCached: 0 kB' 'Active: 3069352 kB' 'Inactive: 73112 kB' 'Active(anon): 2938932 kB' 'Inactive(anon): 0 kB' 'Active(file): 130420 kB' 'Inactive(file): 73112 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2827560 kB' 'Mapped: 98772 kB' 'AnonPages: 318000 kB' 'Shmem: 2624028 kB' 'KernelStack: 7016 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 56068 kB' 'Slab: 221112 kB' 'SReclaimable: 56068 kB' 'SUnreclaim: 165044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.908
[xtrace condensed: the same read/compare/continue cycle walks node0's snapshot above, from MemTotal through HugePages_Free, without matching HugePages_Surp]
00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.909 node0=1024 expecting 1024 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.909 00:04:45.909 real 0m2.997s 00:04:45.909 user 0m1.240s 00:04:45.909 sys 0m1.708s 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.909 16:03:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:45.909 ************************************ 00:04:45.909 END TEST no_shrink_alloc 00:04:45.909 ************************************ 00:04:45.909
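The 'node0=1024 expecting 1024' check that closes no_shrink_alloc boils down to reading each NUMA node's hugepage count back out of sysfs and comparing it with what the test expects the kernel to have placed there. A hedged sketch of that verification (standard kernel sysfs paths; the expected split is this run's, and the variable names are the editor's):

declare -A expected=([0]=1024 [1]=0)   # this run: all pages on node0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node count for the default 2 MiB hugepage size
    count=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$count expecting ${expected[$node]:-0}"
    (( count == ${expected[$node]:-0} )) || exit 1
done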
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:45.909 16:03:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:45.909 00:04:45.909 real 0m12.018s 00:04:45.909 user 0m4.714s 00:04:45.909 sys 0m6.285s 00:04:45.909 16:03:28 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.909 16:03:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:45.909 ************************************ 00:04:45.909 END TEST hugepages 00:04:45.909 ************************************ 00:04:45.909 16:03:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:45.910 16:03:28 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.910 16:03:28 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.910 16:03:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.910 ************************************ 00:04:45.910 START TEST driver 00:04:45.910 ************************************ 00:04:45.910 16:03:28 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:45.910 * Looking for test storage... 
00:04:45.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:45.910 16:03:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:45.910 16:03:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.910 16:03:28 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.480 16:03:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:48.480 16:03:31 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.480 16:03:31 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.480 16:03:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:48.480 ************************************ 00:04:48.480 START TEST guess_driver 00:04:48.480 ************************************ 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:48.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:48.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:48.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:48.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:48.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:48.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:48.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:48.480 16:03:31 setup.sh.driver.guess_driver 
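Condensed, the pick_driver/vfio decision just traced is: prefer vfio-pci whenever the kernel exposes IOMMU groups (143 on this host) or unsafe no-IOMMU mode is enabled, and the module's dependency chain resolves to real .ko files via modprobe --show-depends. A simplified sketch of that logic (editor's reconstruction from the trace, not the verbatim setup/driver.sh):

# pick vfio-pci when an IOMMU is usable, mirroring the trace above
pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=()
    shopt -s nullglob                        # empty glob -> empty array
    iommu_groups=(/sys/kernel/iommu_groups/*)
    shopt -u nullglob
    if { [[ $unsafe_vfio == Y ]] || (( ${#iommu_groups[@]} > 0 )); } &&
        modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
driver=$(pick_driver)   # this host: 143 IOMMU groups -> vfio-pci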
16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:48.480 Looking for driver=vfio-pci 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.480 16:03:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.859 16:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.859 16:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.859 16:03:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.859
[xtrace condensed: the identical @58/@61/@57 marker-check triplet repeats for each remaining line of setup.sh config output]
00:04:50.798 16:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.798 16:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.798 16:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.798 16:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:50.798 16:03:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:50.798 16:03:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.798 16:03:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.335 00:04:53.335 real 0m5.041s 00:04:53.335 user 0m1.133s 00:04:53.335 sys 0m1.912s 00:04:53.335 16:03:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.335 16:03:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.335 ************************************ 00:04:53.335 END TEST guess_driver 00:04:53.335 ************************************ 00:04:53.335 00:04:53.335 real 0m7.569s 00:04:53.335 user 0m1.666s 00:04:53.335 sys 0m2.907s 00:04:53.335 16:03:36 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable
16:03:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.335 ************************************ 00:04:53.335 END TEST driver 00:04:53.335 ************************************ 00:04:53.335 16:03:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:53.335 16:03:36 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.335 16:03:36 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.335 16:03:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.595 ************************************ 00:04:53.595 START TEST devices 00:04:53.595 ************************************ 00:04:53.595 16:03:36 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:53.595 * Looking for test storage... 00:04:53.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:53.595 16:03:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:53.595 16:03:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:53.595 16:03:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.595 16:03:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:54.974 16:03:37 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:54.974 No valid GPT data, 
bailing 00:04:54.974 16:03:37 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:54.974 16:03:37 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:54.974 16:03:37 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:54.974 16:03:37 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.974 16:03:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:54.974 ************************************ 00:04:54.974 START TEST nvme_mount 00:04:54.974 ************************************ 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:54.974 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:54.975 16:03:37 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.975 16:03:37 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:55.913 Creating new GPT entries in memory. 00:04:55.913 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:55.913 other utilities. 00:04:55.913 16:03:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:55.913 16:03:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.913 16:03:38 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:55.913 16:03:38 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.913 16:03:38 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.291 Creating new GPT entries in memory. 00:04:57.291 The operation has completed successfully. 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 178090 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
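The trace above condenses the nvme_mount setup into a short, repeatable sequence: zap the GPT, create a single 1 GiB partition, wait for the partition uevent, format it ext4, mount it under the test directory, and drop a dummy file to verify later. A minimal standalone sketch of that sequence, assuming /dev/nvme0n1 is a disposable test disk; sync_dev_uevents.sh is SPDK-specific and is replaced here with a plain udevadm settle:

    # WARNING: destructive; run only against a disposable test disk.
    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                   # destroy existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199        # one 1 GiB partition, same sectors as the log
    udevadm settle                             # stand-in for scripts/sync_dev_uevents.sh
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"                       # dummy file the verify step checks (redirect assumed)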
00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.291 16:03:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.234 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:58.492 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.492 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.750 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:58.750 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:58.750 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:58.750 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:58.750 16:03:41 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.750 16:03:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.127 16:03:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:05:00.127 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.128 16:03:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.510 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.511 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.511 00:05:01.511 real 0m6.639s 00:05:01.511 user 0m1.632s 00:05:01.511 sys 0m2.611s 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.511 16:03:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.511 ************************************ 00:05:01.511 END TEST nvme_mount 00:05:01.511 ************************************ 
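The cleanup that closes the test mirrors the wipefs output above: unmount first, then erase the ext4 superblock magic (53 ef) on the partition and the GPT headers plus protective MBR (55 aa) on the whole disk. Sketched standalone, with the same disposable-disk caveat:

    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # clears the ext4 magic
    [[ -b /dev/nvme0n1 ]]   && wipefs --all /dev/nvme0n1     # clears primary/backup GPT and PMBR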
00:05:01.511 16:03:44 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:01.511 16:03:44 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.511 16:03:44 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.511 16:03:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.773 ************************************ 00:05:01.773 START TEST dm_mount 00:05:01.773 ************************************ 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.773 16:03:44 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:02.710 Creating new GPT entries in memory. 00:05:02.710 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:02.710 other utilities. 00:05:02.710 16:03:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:02.710 16:03:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.710 16:03:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.710 16:03:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.710 16:03:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:03.647 Creating new GPT entries in memory. 00:05:03.647 The operation has completed successfully. 
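The sector ranges in the two dm_mount sgdisk calls come straight from the common.sh arithmetic visible in the trace: 1073741824 bytes divided by 512 gives 2097152 sectors per partition, the first partition starts at sector 2048, and each subsequent one starts one past the previous end. Reproduced as a sketch:

    size=$(( 1073741824 / 512 ))   # 2097152 sectors per 1 GiB partition
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        echo "sgdisk /dev/nvme0n1 --new=${part}:${part_start}:${part_end}"
    done
    # prints --new=1:2048:2099199 and --new=2:2099200:4196351, matching the log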
00:05:03.647 16:03:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.647 16:03:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.647 16:03:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.647 16:03:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.647 16:03:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:04.581 The operation has completed successfully. 00:05:04.581 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.581 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.581 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 180496 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.838 16:03:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.234 16:03:48 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.234 16:03:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.169 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:07.428 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:07.428 00:05:07.428 real 0m5.792s 00:05:07.428 user 0m1.005s 00:05:07.428 sys 0m1.686s 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.428 16:03:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:07.428 ************************************ 00:05:07.428 END TEST dm_mount 00:05:07.428 ************************************ 00:05:07.428 16:03:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:07.428 16:03:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:07.428 16:03:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.428 16:03:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.428 16:03:50 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.428 16:03:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.428 16:03:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:07.686 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:07.686 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:07.686 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:07.686 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:07.686 16:03:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:07.686 16:03:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:07.686 16:03:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.686 16:03:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.686 16:03:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.686 16:03:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.686 16:03:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:07.686 00:05:07.686 real 0m14.266s 00:05:07.686 user 0m3.245s 00:05:07.686 sys 0m5.297s 00:05:07.686 16:03:50 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.686 16:03:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:07.686 ************************************ 00:05:07.686 END TEST devices 00:05:07.686 ************************************ 00:05:07.686 00:05:07.686 real 0m45.117s 00:05:07.686 user 0m13.129s 00:05:07.686 sys 0m20.296s 00:05:07.686 16:03:50 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.686 16:03:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.686 ************************************ 00:05:07.686 END TEST setup.sh 00:05:07.686 ************************************ 00:05:07.686 16:03:50 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:09.061 Hugepages 00:05:09.061 node hugesize free / total 00:05:09.061 node0 1048576kB 0 / 0 00:05:09.061 node0 2048kB 2048 / 2048 00:05:09.061 node1 1048576kB 0 / 0 00:05:09.061 node1 2048kB 0 / 0 00:05:09.061 00:05:09.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.061 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:09.061 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:09.061 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:09.061 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:09.061 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:09.061 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:09.061 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:09.061 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:09.061 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:09.061 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:09.061 16:03:52 -- spdk/autotest.sh@130 -- # uname -s 00:05:09.061 16:03:52 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:09.061 16:03:52 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:09.061 16:03:52 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.432 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.432 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.432 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.432 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.432 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.432 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.432 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.432 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.432 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:10.432 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:10.432 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:10.432 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:10.432 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:10.432 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:10.432 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:10.690 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:11.629 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.629 16:03:54 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:12.564 16:03:55 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:12.564 16:03:55 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:12.564 16:03:55 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.564 16:03:55 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:12.564 16:03:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:12.564 16:03:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:12.564 16:03:55 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.564 16:03:55 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.564 16:03:55 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:12.564 16:03:55 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:12.564 16:03:55 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:82:00.0 00:05:12.564 16:03:55 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.936 Waiting for block devices as requested 00:05:13.936 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:05:13.936 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:13.936 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.194 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:14.194 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:14.194 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:14.453 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:14.453 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:14.453 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:14.453 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:14.711 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:14.711 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:14.711 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:14.711 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:14.997 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:14.997 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:14.997 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:15.283 16:03:58 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
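The bdfs array driving the per-controller loop below is filled by get_nvme_bdfs, which asks gen_nvme.sh for an SPDK bdev config and pulls the PCI addresses out with jq, exactly as the trace shows. A condensed sketch, with paths assuming the SPDK checkout used in this job:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && echo "No NVMe drives found" >&2
    printf '%s\n' "${bdfs[@]}"   # on this node: 0000:82:00.0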
00:05:15.283 16:03:58 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1498 -- # grep 0000:82:00.0/nvme/nvme 00:05:15.283 16:03:58 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:15.283 16:03:58 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:15.283 16:03:58 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:15.283 16:03:58 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:15.283 16:03:58 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:15.283 16:03:58 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:15.283 16:03:58 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:15.283 16:03:58 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:15.283 16:03:58 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:15.283 16:03:58 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:15.283 16:03:58 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:15.283 16:03:58 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:15.283 16:03:58 -- common/autotest_common.sh@1553 -- # continue 00:05:15.283 16:03:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:15.283 16:03:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.283 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:05:15.283 16:03:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:15.283 16:03:58 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:15.283 16:03:58 -- common/autotest_common.sh@10 -- # set +x 00:05:15.283 16:03:58 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.661 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.661 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.661 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.661 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.661 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.661 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.661 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.661 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.661 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:17.597 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.597 16:04:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:17.597 16:04:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.597 16:04:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.597 16:04:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:17.597 16:04:00 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:17.597 16:04:00 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.597 16:04:00 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:17.597 16:04:00 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:17.597 16:04:00 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:17.597 16:04:00 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:17.597 16:04:00 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:17.597 16:04:00 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.597 16:04:00 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:17.597 16:04:00 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:17.861 16:04:00 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:17.861 16:04:00 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:82:00.0 00:05:17.861 16:04:00 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:17.861 16:04:00 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:17.861 16:04:00 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:17.861 16:04:00 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:17.861 16:04:00 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:17.861 16:04:00 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:82:00.0 00:05:17.861 16:04:00 -- common/autotest_common.sh@1588 -- # [[ -z 0000:82:00.0 ]] 00:05:17.861 16:04:00 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=185840 00:05:17.861 16:04:00 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.861 16:04:00 -- common/autotest_common.sh@1594 -- # waitforlisten 185840 00:05:17.861 16:04:00 -- common/autotest_common.sh@827 -- # '[' -z 185840 ']' 00:05:17.861 16:04:00 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.861 16:04:00 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.861 16:04:00 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.861 16:04:00 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.861 16:04:00 -- common/autotest_common.sh@10 -- # set +x 00:05:17.861 [2024-07-15 16:04:00.686501] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
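get_nvme_bdfs_by_id, which selected the controller for the OPAL revert below, filters the enumerated BDFs by the PCI device ID read from sysfs, keeping only controllers whose ID matches the requested 0x0a54. A sketch of that filter; the loop body is inferred from the single iteration visible in the trace:

    wanted=0x0a54
    bdfs=(0000:82:00.0)   # from get_nvme_bdfs above
    matched=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $device == "$wanted" ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"   # here: 0000:82:00.0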
00:05:17.861 [2024-07-15 16:04:00.686589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185840 ] 00:05:17.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.861 [2024-07-15 16:04:00.744766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.861 [2024-07-15 16:04:00.824019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.130 16:04:01 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.130 16:04:01 -- common/autotest_common.sh@860 -- # return 0 00:05:18.130 16:04:01 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:18.130 16:04:01 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:18.130 16:04:01 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:21.413 nvme0n1 00:05:21.413 16:04:04 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:21.413 [2024-07-15 16:04:04.360154] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:21.413 [2024-07-15 16:04:04.360196] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:21.413 request: 00:05:21.413 { 00:05:21.413 "nvme_ctrlr_name": "nvme0", 00:05:21.413 "password": "test", 00:05:21.413 "method": "bdev_nvme_opal_revert", 00:05:21.413 "req_id": 1 00:05:21.413 } 00:05:21.413 Got JSON-RPC error response 00:05:21.413 response: 00:05:21.413 { 00:05:21.413 "code": -32603, 00:05:21.413 "message": "Internal error" 00:05:21.413 } 00:05:21.413 16:04:04 -- common/autotest_common.sh@1600 -- # true 00:05:21.413 16:04:04 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:21.413 16:04:04 -- common/autotest_common.sh@1604 -- # killprocess 185840 00:05:21.413 16:04:04 -- common/autotest_common.sh@946 -- # '[' -z 185840 ']' 00:05:21.413 16:04:04 -- common/autotest_common.sh@950 -- # kill -0 185840 00:05:21.413 16:04:04 -- common/autotest_common.sh@951 -- # uname 00:05:21.413 16:04:04 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:21.413 16:04:04 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 185840 00:05:21.670 16:04:04 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:21.670 16:04:04 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:21.670 16:04:04 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 185840' 00:05:21.670 killing process with pid 185840 00:05:21.670 16:04:04 -- common/autotest_common.sh@965 -- # kill 185840 00:05:21.670 16:04:04 -- common/autotest_common.sh@970 -- # wait 185840 00:05:23.562 16:04:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:23.562 16:04:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:23.562 16:04:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:23.562 16:04:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:23.562 16:04:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:23.562 16:04:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.562 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:23.562 16:04:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:23.562 16:04:06 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.562 16:04:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.562 16:04:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.562 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:23.562 ************************************ 00:05:23.562 START TEST env 00:05:23.562 ************************************ 00:05:23.562 16:04:06 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:23.562 * Looking for test storage... 00:05:23.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:23.562 16:04:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.562 16:04:06 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.562 16:04:06 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.562 16:04:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.562 ************************************ 00:05:23.562 START TEST env_memory 00:05:23.562 ************************************ 00:05:23.562 16:04:06 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:23.562 00:05:23.562 00:05:23.562 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.562 http://cunit.sourceforge.net/ 00:05:23.562 00:05:23.562 00:05:23.562 Suite: memory 00:05:23.562 Test: alloc and free memory map ...[2024-07-15 16:04:06.295483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:23.562 passed 00:05:23.562 Test: mem map translation ...[2024-07-15 16:04:06.316223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:23.562 [2024-07-15 16:04:06.316243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:23.563 [2024-07-15 16:04:06.316299] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:23.563 [2024-07-15 16:04:06.316310] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:23.563 passed 00:05:23.563 Test: mem map registration ...[2024-07-15 16:04:06.357452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:23.563 [2024-07-15 16:04:06.357470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:23.563 passed 00:05:23.563 Test: mem map adjacent registrations ...passed 00:05:23.563 00:05:23.563 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.563 suites 1 1 n/a 0 0 00:05:23.563 tests 4 4 4 0 0 00:05:23.563 asserts 152 152 152 0 n/a 00:05:23.563 00:05:23.563 Elapsed time = 0.142 seconds 00:05:23.563 00:05:23.563 real 0m0.149s 00:05:23.563 user 0m0.140s 00:05:23.563 sys 0m0.008s 00:05:23.563 16:04:06 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.563 16:04:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:23.563 ************************************ 00:05:23.563 END TEST env_memory 00:05:23.563 ************************************ 00:05:23.563 16:04:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.563 16:04:06 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.563 16:04:06 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.563 16:04:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.563 ************************************ 00:05:23.563 START TEST env_vtophys 00:05:23.563 ************************************ 00:05:23.563 16:04:06 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:23.563 EAL: lib.eal log level changed from notice to debug 00:05:23.563 EAL: Detected lcore 0 as core 0 on socket 0 00:05:23.563 EAL: Detected lcore 1 as core 1 on socket 0 00:05:23.563 EAL: Detected lcore 2 as core 2 on socket 0 00:05:23.563 EAL: Detected lcore 3 as core 3 on socket 0 00:05:23.563 EAL: Detected lcore 4 as core 4 on socket 0 00:05:23.563 EAL: Detected lcore 5 as core 5 on socket 0 00:05:23.563 EAL: Detected lcore 6 as core 8 on socket 0 00:05:23.563 EAL: Detected lcore 7 as core 9 on socket 0 00:05:23.563 EAL: Detected lcore 8 as core 10 on socket 0 00:05:23.563 EAL: Detected lcore 9 as core 11 on socket 0 00:05:23.563 EAL: Detected lcore 10 as core 12 on socket 0 00:05:23.563 EAL: Detected lcore 11 as core 13 on socket 0 00:05:23.563 EAL: Detected lcore 12 as core 0 on socket 1 00:05:23.563 EAL: Detected lcore 13 as core 1 on socket 1 00:05:23.563 EAL: Detected lcore 14 as core 2 on socket 1 00:05:23.563 EAL: Detected lcore 15 as core 3 on socket 1 00:05:23.563 EAL: Detected lcore 16 as core 4 on socket 1 00:05:23.563 EAL: Detected lcore 17 as core 5 on socket 1 00:05:23.563 EAL: Detected lcore 18 as core 8 on socket 1 00:05:23.563 EAL: Detected lcore 19 as core 9 on socket 1 00:05:23.563 EAL: Detected lcore 20 as core 10 on socket 1 00:05:23.563 EAL: Detected lcore 21 as core 11 on socket 1 00:05:23.563 EAL: Detected lcore 22 as core 12 on socket 1 00:05:23.563 EAL: Detected lcore 23 as core 13 on socket 1 00:05:23.563 EAL: Detected lcore 24 as core 0 on socket 0 00:05:23.563 EAL: Detected lcore 25 as core 1 on socket 0 00:05:23.563 EAL: Detected lcore 26 as core 2 on socket 0 00:05:23.563 EAL: Detected lcore 27 as core 3 on socket 0 00:05:23.563 EAL: Detected lcore 28 as core 4 on socket 0 00:05:23.563 EAL: Detected lcore 29 as core 5 on socket 0 00:05:23.563 EAL: Detected lcore 30 as core 8 on socket 0 00:05:23.563 EAL: Detected lcore 31 as core 9 on socket 0 00:05:23.563 EAL: Detected lcore 32 as core 10 on socket 0 00:05:23.563 EAL: Detected lcore 33 as core 11 on socket 0 00:05:23.563 EAL: Detected lcore 34 as core 12 on socket 0 00:05:23.563 EAL: Detected lcore 35 as core 13 on socket 0 00:05:23.563 EAL: Detected lcore 36 as core 0 on socket 1 00:05:23.563 EAL: Detected lcore 37 as core 1 on socket 1 00:05:23.563 EAL: Detected lcore 38 as core 2 on socket 1 00:05:23.563 EAL: Detected lcore 39 as core 3 on socket 1 00:05:23.563 EAL: Detected lcore 40 as core 4 on socket 1 00:05:23.563 EAL: Detected lcore 41 as core 5 on socket 1 00:05:23.563 EAL: Detected lcore 42 as core 8 on socket 1 00:05:23.563 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:23.563 EAL: Detected lcore 44 as core 10 on socket 1 00:05:23.563 EAL: Detected lcore 45 as core 11 on socket 1 00:05:23.563 EAL: Detected lcore 46 as core 12 on socket 1 00:05:23.563 EAL: Detected lcore 47 as core 13 on socket 1 00:05:23.563 EAL: Maximum logical cores by configuration: 128 00:05:23.563 EAL: Detected CPU lcores: 48 00:05:23.563 EAL: Detected NUMA nodes: 2 00:05:23.563 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:23.563 EAL: Detected shared linkage of DPDK 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:23.563 EAL: Registered [vdev] bus. 00:05:23.563 EAL: bus.vdev log level changed from disabled to notice 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:23.563 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:23.563 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:23.563 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:23.563 EAL: No shared files mode enabled, IPC will be disabled 00:05:23.563 EAL: No shared files mode enabled, IPC is disabled 00:05:23.563 EAL: Bus pci wants IOVA as 'DC' 00:05:23.563 EAL: Bus vdev wants IOVA as 'DC' 00:05:23.563 EAL: Buses did not request a specific IOVA mode. 00:05:23.563 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:23.563 EAL: Selected IOVA mode 'VA' 00:05:23.563 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.563 EAL: Probing VFIO support... 00:05:23.563 EAL: IOMMU type 1 (Type 1) is supported 00:05:23.563 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:23.563 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:23.563 EAL: VFIO support initialized 00:05:23.563 EAL: Ask a virtual area of 0x2e000 bytes 00:05:23.563 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:23.563 EAL: Setting up physically contiguous memory... 
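Note: the IOVA-mode decision above ('IOMMU is available, selecting IOVA as VA', 'IOMMU type 1 (Type 1) is supported', 'VFIO support initialized') depends on host state that can be sanity-checked before a run. A rough pre-flight sketch; EAL's real heuristics are more involved, so treat this only as an approximation:

#!/usr/bin/env bash
# Rough check: will EAL likely get VFIO and pick IOVA mode 'VA' here?
if [[ -c /dev/vfio/vfio ]]; then
    echo "vfio container device present"
else
    echo "no /dev/vfio/vfio - vfio-pci module not loaded?"
fi
# A populated iommu_groups tree is what makes 'IOMMU is available' true.
if compgen -G '/sys/kernel/iommu_groups/*' >/dev/null; then
    echo "IOMMU groups: $(ls /sys/kernel/iommu_groups | wc -l)"
else
    echo "no IOMMU groups - check kernel cmdline (e.g. intel_iommu=on)"
fi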
00:05:23.563 EAL: Setting maximum number of open files to 524288 00:05:23.563 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:23.563 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:23.563 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:23.563 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:23.563 EAL: Ask a virtual area of 0x61000 bytes 00:05:23.563 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:23.563 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:23.563 EAL: Ask a virtual area of 0x400000000 bytes 00:05:23.563 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:23.563 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:23.563 EAL: Hugepages will be freed exactly as allocated. 00:05:23.563 EAL: No shared files mode enabled, IPC is disabled 00:05:23.563 EAL: No shared files mode enabled, IPC is disabled 00:05:23.563 EAL: TSC frequency is ~2700000 KHz 00:05:23.563 EAL: Main lcore 0 is ready (tid=7fc50e8b3a00;cpuset=[0]) 00:05:23.563 EAL: Trying to obtain current memory policy. 00:05:23.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.563 EAL: Restoring previous memory policy: 0 00:05:23.563 EAL: request: mp_malloc_sync 00:05:23.563 EAL: No shared files mode enabled, IPC is disabled 00:05:23.563 EAL: Heap on socket 0 was expanded by 2MB 00:05:23.563 EAL: PCI device 0000:0e:00.0 on NUMA socket 0 00:05:23.563 EAL: probe driver: 8086:1583 net_i40e 00:05:23.563 EAL: Not managed by a supported kernel driver, skipped 00:05:23.563 EAL: PCI device 0000:0e:00.1 on NUMA socket 0 00:05:23.564 EAL: probe driver: 8086:1583 net_i40e 00:05:23.564 EAL: Not managed by a supported kernel driver, skipped 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:23.564 EAL: Mem event callback 'spdk:(nil)' registered 00:05:23.564 00:05:23.564 00:05:23.564 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.564 http://cunit.sourceforge.net/ 00:05:23.564 00:05:23.564 00:05:23.564 Suite: components_suite 00:05:23.564 Test: vtophys_malloc_test ...passed 00:05:23.564 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:23.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.564 EAL: Restoring previous memory policy: 4 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was expanded by 4MB 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was shrunk by 4MB 00:05:23.564 EAL: Trying to obtain current memory policy. 00:05:23.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.564 EAL: Restoring previous memory policy: 4 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was expanded by 6MB 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was shrunk by 6MB 00:05:23.564 EAL: Trying to obtain current memory policy. 00:05:23.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.564 EAL: Restoring previous memory policy: 4 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was expanded by 10MB 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was shrunk by 10MB 00:05:23.564 EAL: Trying to obtain current memory policy. 
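Note: each 'Heap on socket 0 was expanded/shrunk by N MB' pair above is DPDK growing and releasing the malloc heap in 2048 kB hugepage units, firing the 'spdk:(nil)' mem event callback registered earlier ('Hugepages will be freed exactly as allocated' is the matching EAL guarantee). The traffic is visible in the kernel's per-node hugepage counters; a small observation sketch, with the 1-second interval arbitrary:

#!/usr/bin/env bash
# Watch free 2MB hugepages per NUMA node while the vtophys test runs;
# each heap expansion in the log shows up as a dip in free_hugepages.
while sleep 1; do
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages; do
        echo "$f: $(cat "$f")"
    done
    echo ---
done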
00:05:23.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.564 EAL: Restoring previous memory policy: 4 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was expanded by 18MB 00:05:23.564 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.564 EAL: request: mp_malloc_sync 00:05:23.564 EAL: No shared files mode enabled, IPC is disabled 00:05:23.564 EAL: Heap on socket 0 was shrunk by 18MB 00:05:23.564 EAL: Trying to obtain current memory policy. 00:05:23.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.820 EAL: Restoring previous memory policy: 4 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.820 EAL: request: mp_malloc_sync 00:05:23.820 EAL: No shared files mode enabled, IPC is disabled 00:05:23.820 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.820 EAL: request: mp_malloc_sync 00:05:23.820 EAL: No shared files mode enabled, IPC is disabled 00:05:23.820 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.820 EAL: Trying to obtain current memory policy. 00:05:23.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.820 EAL: Restoring previous memory policy: 4 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.820 EAL: request: mp_malloc_sync 00:05:23.820 EAL: No shared files mode enabled, IPC is disabled 00:05:23.820 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.820 EAL: request: mp_malloc_sync 00:05:23.820 EAL: No shared files mode enabled, IPC is disabled 00:05:23.820 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.820 EAL: Trying to obtain current memory policy. 00:05:23.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.820 EAL: Restoring previous memory policy: 4 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.820 EAL: request: mp_malloc_sync 00:05:23.820 EAL: No shared files mode enabled, IPC is disabled 00:05:23.820 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.820 EAL: request: mp_malloc_sync 00:05:23.820 EAL: No shared files mode enabled, IPC is disabled 00:05:23.820 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.820 EAL: Trying to obtain current memory policy. 00:05:23.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.820 EAL: Restoring previous memory policy: 4 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.820 EAL: request: mp_malloc_sync 00:05:23.820 EAL: No shared files mode enabled, IPC is disabled 00:05:23.820 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.076 EAL: request: mp_malloc_sync 00:05:24.076 EAL: No shared files mode enabled, IPC is disabled 00:05:24.076 EAL: Heap on socket 0 was shrunk by 258MB 00:05:24.076 EAL: Trying to obtain current memory policy. 
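Note: 'Setting policy MPOL_PREFERRED for socket 0' corresponds to a set_mempolicy(2) call steering the next expansion to one NUMA node, and 'Restoring previous memory policy: 4' puts the saved mode back (4 is MPOL_LOCAL in the usual linux/mempolicy.h numbering, if those values apply here). The same placement can be imposed from outside the process; a sketch where ./memory_test stands in for any of the binaries above:

# Inspect and steer NUMA placement externally with numactl.
numactl --hardware                    # node count and free memory, cf. 'Detected NUMA nodes: 2'
numactl --preferred=0 ./memory_test   # prefer allocations from node 0, fall back if full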
00:05:24.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.076 EAL: Restoring previous memory policy: 4 00:05:24.076 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.076 EAL: request: mp_malloc_sync 00:05:24.076 EAL: No shared files mode enabled, IPC is disabled 00:05:24.076 EAL: Heap on socket 0 was expanded by 514MB 00:05:24.334 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.334 EAL: request: mp_malloc_sync 00:05:24.334 EAL: No shared files mode enabled, IPC is disabled 00:05:24.334 EAL: Heap on socket 0 was shrunk by 514MB 00:05:24.334 EAL: Trying to obtain current memory policy. 00:05:24.334 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:24.744 EAL: Restoring previous memory policy: 4 00:05:24.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.744 EAL: request: mp_malloc_sync 00:05:24.744 EAL: No shared files mode enabled, IPC is disabled 00:05:24.744 EAL: Heap on socket 0 was expanded by 1026MB 00:05:24.744 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.001 EAL: request: mp_malloc_sync 00:05:25.001 EAL: No shared files mode enabled, IPC is disabled 00:05:25.001 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:25.001 passed 00:05:25.001 00:05:25.001 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.001 suites 1 1 n/a 0 0 00:05:25.001 tests 2 2 2 0 0 00:05:25.001 asserts 497 497 497 0 n/a 00:05:25.001 00:05:25.001 Elapsed time = 1.315 seconds 00:05:25.001 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.001 EAL: request: mp_malloc_sync 00:05:25.001 EAL: No shared files mode enabled, IPC is disabled 00:05:25.001 EAL: Heap on socket 0 was shrunk by 2MB 00:05:25.001 EAL: No shared files mode enabled, IPC is disabled 00:05:25.001 EAL: No shared files mode enabled, IPC is disabled 00:05:25.001 EAL: No shared files mode enabled, IPC is disabled 00:05:25.001 00:05:25.001 real 0m1.419s 00:05:25.001 user 0m0.834s 00:05:25.001 sys 0m0.557s 00:05:25.001 16:04:07 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.001 16:04:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:25.001 ************************************ 00:05:25.001 END TEST env_vtophys 00:05:25.001 ************************************ 00:05:25.001 16:04:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.001 16:04:07 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.001 16:04:07 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.001 16:04:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.001 ************************************ 00:05:25.001 START TEST env_pci 00:05:25.001 ************************************ 00:05:25.001 16:04:07 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:25.001 00:05:25.001 00:05:25.001 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.001 http://cunit.sourceforge.net/ 00:05:25.001 00:05:25.001 00:05:25.001 Suite: pci 00:05:25.001 Test: pci_hook ...[2024-07-15 16:04:07.933184] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 186741 has claimed it 00:05:25.001 EAL: Cannot find device (10000:00:01.0) 00:05:25.001 EAL: Failed to attach device on primary process 00:05:25.001 passed 00:05:25.001 00:05:25.001 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:25.001 suites 1 1 n/a 0 0 00:05:25.001 tests 1 1 1 0 0 00:05:25.001 asserts 25 25 25 0 n/a 00:05:25.001 00:05:25.001 Elapsed time = 0.022 seconds 00:05:25.001 00:05:25.001 real 0m0.035s 00:05:25.001 user 0m0.009s 00:05:25.001 sys 0m0.026s 00:05:25.001 16:04:07 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.001 16:04:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:25.001 ************************************ 00:05:25.001 END TEST env_pci 00:05:25.001 ************************************ 00:05:25.001 16:04:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:25.001 16:04:07 env -- env/env.sh@15 -- # uname 00:05:25.258 16:04:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:25.258 16:04:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:25.258 16:04:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.258 16:04:07 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:25.258 16:04:07 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.258 16:04:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.258 ************************************ 00:05:25.258 START TEST env_dpdk_post_init 00:05:25.258 ************************************ 00:05:25.258 16:04:08 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:25.258 EAL: Detected CPU lcores: 48 00:05:25.258 EAL: Detected NUMA nodes: 2 00:05:25.258 EAL: Detected shared linkage of DPDK 00:05:25.258 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.258 EAL: Selected IOVA mode 'VA' 00:05:25.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.258 EAL: VFIO support initialized 00:05:25.258 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.258 EAL: Using IOMMU type 1 (Type 1) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:25.258 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:25.516 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:25.516 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:25.516 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:25.516 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:25.516 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:26.082 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:29.360 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:29.360 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:29.618 Starting DPDK initialization... 00:05:29.618 Starting SPDK post initialization... 00:05:29.618 SPDK NVMe probe 00:05:29.619 Attaching to 0000:82:00.0 00:05:29.619 Attached to 0000:82:00.0 00:05:29.619 Cleaning up... 00:05:29.619 00:05:29.619 real 0m4.379s 00:05:29.619 user 0m3.251s 00:05:29.619 sys 0m0.190s 00:05:29.619 16:04:12 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.619 16:04:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 ************************************ 00:05:29.619 END TEST env_dpdk_post_init 00:05:29.619 ************************************ 00:05:29.619 16:04:12 env -- env/env.sh@26 -- # uname 00:05:29.619 16:04:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.619 16:04:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.619 16:04:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.619 16:04:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.619 16:04:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 ************************************ 00:05:29.619 START TEST env_mem_callbacks 00:05:29.619 ************************************ 00:05:29.619 16:04:12 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.619 EAL: Detected CPU lcores: 48 00:05:29.619 EAL: Detected NUMA nodes: 2 00:05:29.619 EAL: Detected shared linkage of DPDK 00:05:29.619 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.619 EAL: Selected IOVA mode 'VA' 00:05:29.619 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.619 EAL: VFIO support initialized 00:05:29.619 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.619 00:05:29.619 00:05:29.619 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.619 http://cunit.sourceforge.net/ 00:05:29.619 00:05:29.619 00:05:29.619 Suite: memory 00:05:29.619 Test: test ... 
00:05:29.619 register 0x200000200000 2097152 00:05:29.619 malloc 3145728 00:05:29.619 register 0x200000400000 4194304 00:05:29.619 buf 0x200000500000 len 3145728 PASSED 00:05:29.619 malloc 64 00:05:29.619 buf 0x2000004fff40 len 64 PASSED 00:05:29.619 malloc 4194304 00:05:29.619 register 0x200000800000 6291456 00:05:29.619 buf 0x200000a00000 len 4194304 PASSED 00:05:29.619 free 0x200000500000 3145728 00:05:29.619 free 0x2000004fff40 64 00:05:29.619 unregister 0x200000400000 4194304 PASSED 00:05:29.619 free 0x200000a00000 4194304 00:05:29.619 unregister 0x200000800000 6291456 PASSED 00:05:29.619 malloc 8388608 00:05:29.619 register 0x200000400000 10485760 00:05:29.619 buf 0x200000600000 len 8388608 PASSED 00:05:29.619 free 0x200000600000 8388608 00:05:29.619 unregister 0x200000400000 10485760 PASSED 00:05:29.619 passed 00:05:29.619 00:05:29.619 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.619 suites 1 1 n/a 0 0 00:05:29.619 tests 1 1 1 0 0 00:05:29.619 asserts 15 15 15 0 n/a 00:05:29.619 00:05:29.619 Elapsed time = 0.005 seconds 00:05:29.619 00:05:29.619 real 0m0.048s 00:05:29.619 user 0m0.013s 00:05:29.619 sys 0m0.035s 00:05:29.619 16:04:12 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.619 16:04:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 ************************************ 00:05:29.619 END TEST env_mem_callbacks 00:05:29.619 ************************************ 00:05:29.619 00:05:29.619 real 0m6.323s 00:05:29.619 user 0m4.363s 00:05:29.619 sys 0m1.012s 00:05:29.619 16:04:12 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.619 16:04:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 ************************************ 00:05:29.619 END TEST env 00:05:29.619 ************************************ 00:05:29.619 16:04:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.619 16:04:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.619 16:04:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.619 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 ************************************ 00:05:29.619 START TEST rpc 00:05:29.619 ************************************ 00:05:29.619 16:04:12 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.619 * Looking for test storage... 00:05:29.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:29.877 16:04:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=187402 00:05:29.877 16:04:12 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:29.877 16:04:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.877 16:04:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 187402 00:05:29.877 16:04:12 rpc -- common/autotest_common.sh@827 -- # '[' -z 187402 ']' 00:05:29.877 16:04:12 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.877 16:04:12 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.877 16:04:12 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
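Note: waitforlisten above (local max_retries=100) simply retries until spdk_tgt answers on /var/tmp/spdk.sock. The same readiness gate can be scripted directly against rpc.py; a minimal sketch using rpc_get_methods as the probe, where the retry count and sleep are arbitrary and bdev_get_bdevs is the same call the integrity test exercises below:

#!/usr/bin/env bash
# Block until the SPDK JSON-RPC socket answers, then list bdev names.
RPC=./scripts/rpc.py          # path relative to an SPDK checkout
SOCK=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
"$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name'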
00:05:29.878 16:04:12 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.878 16:04:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.878 [2024-07-15 16:04:12.651263] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:29.878 [2024-07-15 16:04:12.651343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187402 ] 00:05:29.878 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.878 [2024-07-15 16:04:12.710532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.878 [2024-07-15 16:04:12.803384] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.878 [2024-07-15 16:04:12.803448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 187402' to capture a snapshot of events at runtime. 00:05:29.878 [2024-07-15 16:04:12.803479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.878 [2024-07-15 16:04:12.803491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.878 [2024-07-15 16:04:12.803501] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid187402 for offline analysis/debug. 00:05:29.878 [2024-07-15 16:04:12.803531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.136 16:04:13 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.136 16:04:13 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:30.136 16:04:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.136 16:04:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:30.136 16:04:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.136 16:04:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.136 16:04:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.136 16:04:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.136 16:04:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.136 ************************************ 00:05:30.136 START TEST rpc_integrity 00:05:30.136 ************************************ 00:05:30.136 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:30.136 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.136 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.136 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.136 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.136 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.136 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.393 16:04:13 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.393 { 00:05:30.393 "name": "Malloc0", 00:05:30.393 "aliases": [ 00:05:30.393 "16900158-fc2e-4bd8-ae0c-a49c1294a045" 00:05:30.393 ], 00:05:30.393 "product_name": "Malloc disk", 00:05:30.393 "block_size": 512, 00:05:30.393 "num_blocks": 16384, 00:05:30.393 "uuid": "16900158-fc2e-4bd8-ae0c-a49c1294a045", 00:05:30.393 "assigned_rate_limits": { 00:05:30.393 "rw_ios_per_sec": 0, 00:05:30.393 "rw_mbytes_per_sec": 0, 00:05:30.393 "r_mbytes_per_sec": 0, 00:05:30.393 "w_mbytes_per_sec": 0 00:05:30.393 }, 00:05:30.393 "claimed": false, 00:05:30.393 "zoned": false, 00:05:30.393 "supported_io_types": { 00:05:30.393 "read": true, 00:05:30.393 "write": true, 00:05:30.393 "unmap": true, 00:05:30.393 "write_zeroes": true, 00:05:30.393 "flush": true, 00:05:30.393 "reset": true, 00:05:30.393 "compare": false, 00:05:30.393 "compare_and_write": false, 00:05:30.393 "abort": true, 00:05:30.393 "nvme_admin": false, 00:05:30.393 "nvme_io": false 00:05:30.393 }, 00:05:30.393 "memory_domains": [ 00:05:30.393 { 00:05:30.393 "dma_device_id": "system", 00:05:30.393 "dma_device_type": 1 00:05:30.393 }, 00:05:30.393 { 00:05:30.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.393 "dma_device_type": 2 00:05:30.393 } 00:05:30.393 ], 00:05:30.393 "driver_specific": {} 00:05:30.393 } 00:05:30.393 ]' 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.393 [2024-07-15 16:04:13.174319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.393 [2024-07-15 16:04:13.174372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.393 [2024-07-15 16:04:13.174394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe09460 00:05:30.393 [2024-07-15 16:04:13.174407] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.393 [2024-07-15 16:04:13.175688] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.393 [2024-07-15 16:04:13.175709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.393 Passthru0 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.393 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.393 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.394 { 00:05:30.394 "name": "Malloc0", 00:05:30.394 "aliases": [ 00:05:30.394 "16900158-fc2e-4bd8-ae0c-a49c1294a045" 00:05:30.394 ], 00:05:30.394 "product_name": "Malloc disk", 00:05:30.394 "block_size": 512, 00:05:30.394 "num_blocks": 16384, 00:05:30.394 "uuid": "16900158-fc2e-4bd8-ae0c-a49c1294a045", 00:05:30.394 "assigned_rate_limits": { 00:05:30.394 "rw_ios_per_sec": 0, 00:05:30.394 "rw_mbytes_per_sec": 0, 00:05:30.394 "r_mbytes_per_sec": 0, 00:05:30.394 "w_mbytes_per_sec": 0 00:05:30.394 }, 00:05:30.394 "claimed": true, 00:05:30.394 "claim_type": "exclusive_write", 00:05:30.394 "zoned": false, 00:05:30.394 "supported_io_types": { 00:05:30.394 "read": true, 00:05:30.394 "write": true, 00:05:30.394 "unmap": true, 00:05:30.394 "write_zeroes": true, 00:05:30.394 "flush": true, 00:05:30.394 "reset": true, 00:05:30.394 "compare": false, 00:05:30.394 "compare_and_write": false, 00:05:30.394 "abort": true, 00:05:30.394 "nvme_admin": false, 00:05:30.394 "nvme_io": false 00:05:30.394 }, 00:05:30.394 "memory_domains": [ 00:05:30.394 { 00:05:30.394 "dma_device_id": "system", 00:05:30.394 "dma_device_type": 1 00:05:30.394 }, 00:05:30.394 { 00:05:30.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.394 "dma_device_type": 2 00:05:30.394 } 00:05:30.394 ], 00:05:30.394 "driver_specific": {} 00:05:30.394 }, 00:05:30.394 { 00:05:30.394 "name": "Passthru0", 00:05:30.394 "aliases": [ 00:05:30.394 "44fc243b-c96c-5622-8e5c-2d99bd4ce725" 00:05:30.394 ], 00:05:30.394 "product_name": "passthru", 00:05:30.394 "block_size": 512, 00:05:30.394 "num_blocks": 16384, 00:05:30.394 "uuid": "44fc243b-c96c-5622-8e5c-2d99bd4ce725", 00:05:30.394 "assigned_rate_limits": { 00:05:30.394 "rw_ios_per_sec": 0, 00:05:30.394 "rw_mbytes_per_sec": 0, 00:05:30.394 "r_mbytes_per_sec": 0, 00:05:30.394 "w_mbytes_per_sec": 0 00:05:30.394 }, 00:05:30.394 "claimed": false, 00:05:30.394 "zoned": false, 00:05:30.394 "supported_io_types": { 00:05:30.394 "read": true, 00:05:30.394 "write": true, 00:05:30.394 "unmap": true, 00:05:30.394 "write_zeroes": true, 00:05:30.394 "flush": true, 00:05:30.394 "reset": true, 00:05:30.394 "compare": false, 00:05:30.394 "compare_and_write": false, 00:05:30.394 "abort": true, 00:05:30.394 "nvme_admin": false, 00:05:30.394 "nvme_io": false 00:05:30.394 }, 00:05:30.394 "memory_domains": [ 00:05:30.394 { 00:05:30.394 "dma_device_id": "system", 00:05:30.394 "dma_device_type": 1 00:05:30.394 }, 00:05:30.394 { 00:05:30.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.394 "dma_device_type": 2 00:05:30.394 } 00:05:30.394 ], 00:05:30.394 "driver_specific": { 00:05:30.394 "passthru": { 00:05:30.394 "name": "Passthru0", 00:05:30.394 "base_bdev_name": "Malloc0" 00:05:30.394 } 00:05:30.394 } 00:05:30.394 } 00:05:30.394 ]' 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 
16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.394 16:04:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.394 00:05:30.394 real 0m0.212s 00:05:30.394 user 0m0.139s 00:05:30.394 sys 0m0.018s 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.394 16:04:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 ************************************ 00:05:30.394 END TEST rpc_integrity 00:05:30.394 ************************************ 00:05:30.394 16:04:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.394 16:04:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.394 16:04:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.394 16:04:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 ************************************ 00:05:30.394 START TEST rpc_plugins 00:05:30.394 ************************************ 00:05:30.394 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:30.394 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.394 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.394 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.394 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.394 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.394 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.394 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.394 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.394 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.394 { 00:05:30.394 "name": "Malloc1", 00:05:30.394 "aliases": [ 00:05:30.394 "1a87e69e-8cb4-4f56-bdb9-b64beae8fc41" 00:05:30.394 ], 00:05:30.394 "product_name": "Malloc disk", 00:05:30.394 "block_size": 4096, 00:05:30.394 "num_blocks": 256, 00:05:30.394 "uuid": "1a87e69e-8cb4-4f56-bdb9-b64beae8fc41", 00:05:30.394 "assigned_rate_limits": { 00:05:30.394 "rw_ios_per_sec": 0, 00:05:30.394 "rw_mbytes_per_sec": 0, 00:05:30.394 "r_mbytes_per_sec": 0, 00:05:30.394 "w_mbytes_per_sec": 0 00:05:30.394 }, 00:05:30.394 "claimed": false, 00:05:30.394 "zoned": false, 00:05:30.394 "supported_io_types": { 00:05:30.394 "read": true, 00:05:30.394 "write": true, 00:05:30.394 "unmap": true, 00:05:30.394 "write_zeroes": true, 00:05:30.394 
"flush": true, 00:05:30.394 "reset": true, 00:05:30.394 "compare": false, 00:05:30.394 "compare_and_write": false, 00:05:30.394 "abort": true, 00:05:30.394 "nvme_admin": false, 00:05:30.394 "nvme_io": false 00:05:30.394 }, 00:05:30.394 "memory_domains": [ 00:05:30.394 { 00:05:30.394 "dma_device_id": "system", 00:05:30.394 "dma_device_type": 1 00:05:30.394 }, 00:05:30.394 { 00:05:30.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.394 "dma_device_type": 2 00:05:30.394 } 00:05:30.394 ], 00:05:30.394 "driver_specific": {} 00:05:30.394 } 00:05:30.394 ]' 00:05:30.394 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:30.657 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.657 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.657 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.657 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.657 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:30.657 16:04:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.657 00:05:30.657 real 0m0.105s 00:05:30.657 user 0m0.070s 00:05:30.657 sys 0m0.008s 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.657 16:04:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.657 ************************************ 00:05:30.657 END TEST rpc_plugins 00:05:30.657 ************************************ 00:05:30.657 16:04:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.657 16:04:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.657 16:04:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.658 16:04:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.658 ************************************ 00:05:30.658 START TEST rpc_trace_cmd_test 00:05:30.658 ************************************ 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:30.658 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid187402", 00:05:30.658 "tpoint_group_mask": "0x8", 00:05:30.658 "iscsi_conn": { 00:05:30.658 "mask": "0x2", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "scsi": { 00:05:30.658 "mask": "0x4", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "bdev": { 00:05:30.658 "mask": "0x8", 00:05:30.658 "tpoint_mask": 
"0xffffffffffffffff" 00:05:30.658 }, 00:05:30.658 "nvmf_rdma": { 00:05:30.658 "mask": "0x10", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "nvmf_tcp": { 00:05:30.658 "mask": "0x20", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "ftl": { 00:05:30.658 "mask": "0x40", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "blobfs": { 00:05:30.658 "mask": "0x80", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "dsa": { 00:05:30.658 "mask": "0x200", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "thread": { 00:05:30.658 "mask": "0x400", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "nvme_pcie": { 00:05:30.658 "mask": "0x800", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "iaa": { 00:05:30.658 "mask": "0x1000", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "nvme_tcp": { 00:05:30.658 "mask": "0x2000", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "bdev_nvme": { 00:05:30.658 "mask": "0x4000", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 }, 00:05:30.658 "sock": { 00:05:30.658 "mask": "0x8000", 00:05:30.658 "tpoint_mask": "0x0" 00:05:30.658 } 00:05:30.658 }' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.658 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.916 16:04:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.916 00:05:30.916 real 0m0.182s 00:05:30.916 user 0m0.163s 00:05:30.916 sys 0m0.012s 00:05:30.916 16:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.916 16:04:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 ************************************ 00:05:30.916 END TEST rpc_trace_cmd_test 00:05:30.916 ************************************ 00:05:30.916 16:04:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.916 16:04:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.916 16:04:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.916 16:04:13 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.916 16:04:13 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.916 16:04:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 ************************************ 00:05:30.916 START TEST rpc_daemon_integrity 00:05:30.916 ************************************ 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.916 { 00:05:30.916 "name": "Malloc2", 00:05:30.916 "aliases": [ 00:05:30.916 "4985d774-ed37-4197-9a12-0c02bd8237d3" 00:05:30.916 ], 00:05:30.916 "product_name": "Malloc disk", 00:05:30.916 "block_size": 512, 00:05:30.916 "num_blocks": 16384, 00:05:30.916 "uuid": "4985d774-ed37-4197-9a12-0c02bd8237d3", 00:05:30.916 "assigned_rate_limits": { 00:05:30.916 "rw_ios_per_sec": 0, 00:05:30.916 "rw_mbytes_per_sec": 0, 00:05:30.916 "r_mbytes_per_sec": 0, 00:05:30.916 "w_mbytes_per_sec": 0 00:05:30.916 }, 00:05:30.916 "claimed": false, 00:05:30.916 "zoned": false, 00:05:30.916 "supported_io_types": { 00:05:30.916 "read": true, 00:05:30.916 "write": true, 00:05:30.916 "unmap": true, 00:05:30.916 "write_zeroes": true, 00:05:30.916 "flush": true, 00:05:30.916 "reset": true, 00:05:30.916 "compare": false, 00:05:30.916 "compare_and_write": false, 00:05:30.916 "abort": true, 00:05:30.916 "nvme_admin": false, 00:05:30.916 "nvme_io": false 00:05:30.916 }, 00:05:30.916 "memory_domains": [ 00:05:30.916 { 00:05:30.916 "dma_device_id": "system", 00:05:30.916 "dma_device_type": 1 00:05:30.916 }, 00:05:30.916 { 00:05:30.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.916 "dma_device_type": 2 00:05:30.916 } 00:05:30.916 ], 00:05:30.916 "driver_specific": {} 00:05:30.916 } 00:05:30.916 ]' 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 [2024-07-15 16:04:13.804101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:30.916 [2024-07-15 16:04:13.804154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.916 [2024-07-15 16:04:13.804179] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe09690 00:05:30.916 [2024-07-15 16:04:13.804192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.916 [2024-07-15 16:04:13.805331] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.916 [2024-07-15 16:04:13.805352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.916 Passthru0 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.916 { 00:05:30.916 "name": "Malloc2", 00:05:30.916 "aliases": [ 00:05:30.916 "4985d774-ed37-4197-9a12-0c02bd8237d3" 00:05:30.916 ], 00:05:30.916 "product_name": "Malloc disk", 00:05:30.916 "block_size": 512, 00:05:30.916 "num_blocks": 16384, 00:05:30.916 "uuid": "4985d774-ed37-4197-9a12-0c02bd8237d3", 00:05:30.916 "assigned_rate_limits": { 00:05:30.916 "rw_ios_per_sec": 0, 00:05:30.916 "rw_mbytes_per_sec": 0, 00:05:30.916 "r_mbytes_per_sec": 0, 00:05:30.916 "w_mbytes_per_sec": 0 00:05:30.916 }, 00:05:30.916 "claimed": true, 00:05:30.916 "claim_type": "exclusive_write", 00:05:30.916 "zoned": false, 00:05:30.916 "supported_io_types": { 00:05:30.916 "read": true, 00:05:30.916 "write": true, 00:05:30.916 "unmap": true, 00:05:30.916 "write_zeroes": true, 00:05:30.916 "flush": true, 00:05:30.916 "reset": true, 00:05:30.916 "compare": false, 00:05:30.916 "compare_and_write": false, 00:05:30.916 "abort": true, 00:05:30.916 "nvme_admin": false, 00:05:30.916 "nvme_io": false 00:05:30.916 }, 00:05:30.916 "memory_domains": [ 00:05:30.916 { 00:05:30.916 "dma_device_id": "system", 00:05:30.916 "dma_device_type": 1 00:05:30.916 }, 00:05:30.916 { 00:05:30.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.916 "dma_device_type": 2 00:05:30.916 } 00:05:30.916 ], 00:05:30.916 "driver_specific": {} 00:05:30.916 }, 00:05:30.916 { 00:05:30.916 "name": "Passthru0", 00:05:30.916 "aliases": [ 00:05:30.916 "a66830d7-5d42-58ae-8a8f-b64488f476bd" 00:05:30.916 ], 00:05:30.916 "product_name": "passthru", 00:05:30.916 "block_size": 512, 00:05:30.916 "num_blocks": 16384, 00:05:30.916 "uuid": "a66830d7-5d42-58ae-8a8f-b64488f476bd", 00:05:30.916 "assigned_rate_limits": { 00:05:30.916 "rw_ios_per_sec": 0, 00:05:30.916 "rw_mbytes_per_sec": 0, 00:05:30.916 "r_mbytes_per_sec": 0, 00:05:30.916 "w_mbytes_per_sec": 0 00:05:30.916 }, 00:05:30.916 "claimed": false, 00:05:30.916 "zoned": false, 00:05:30.916 "supported_io_types": { 00:05:30.916 "read": true, 00:05:30.916 "write": true, 00:05:30.916 "unmap": true, 00:05:30.916 "write_zeroes": true, 00:05:30.916 "flush": true, 00:05:30.916 "reset": true, 00:05:30.916 "compare": false, 00:05:30.916 "compare_and_write": false, 00:05:30.916 "abort": true, 00:05:30.916 "nvme_admin": false, 00:05:30.916 "nvme_io": false 00:05:30.916 }, 00:05:30.916 "memory_domains": [ 00:05:30.916 { 00:05:30.916 "dma_device_id": "system", 00:05:30.916 "dma_device_type": 1 00:05:30.916 }, 00:05:30.916 { 00:05:30.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.916 "dma_device_type": 2 00:05:30.916 } 00:05:30.916 ], 00:05:30.916 "driver_specific": { 00:05:30.916 "passthru": { 00:05:30.916 "name": "Passthru0", 00:05:30.916 "base_bdev_name": "Malloc2" 00:05:30.916 } 00:05:30.916 } 00:05:30.916 } 00:05:30.916 ]' 00:05:30.916 16:04:13 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.916 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.917 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.174 16:04:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.174 00:05:31.174 real 0m0.209s 00:05:31.174 user 0m0.136s 00:05:31.174 sys 0m0.018s 00:05:31.174 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.174 16:04:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.174 ************************************ 00:05:31.174 END TEST rpc_daemon_integrity 00:05:31.174 ************************************ 00:05:31.174 16:04:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.174 16:04:13 rpc -- rpc/rpc.sh@84 -- # killprocess 187402 00:05:31.174 16:04:13 rpc -- common/autotest_common.sh@946 -- # '[' -z 187402 ']' 00:05:31.174 16:04:13 rpc -- common/autotest_common.sh@950 -- # kill -0 187402 00:05:31.174 16:04:13 rpc -- common/autotest_common.sh@951 -- # uname 00:05:31.175 16:04:13 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.175 16:04:13 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 187402 00:05:31.175 16:04:13 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.175 16:04:13 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.175 16:04:13 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 187402' 00:05:31.175 killing process with pid 187402 00:05:31.175 16:04:13 rpc -- common/autotest_common.sh@965 -- # kill 187402 00:05:31.175 16:04:13 rpc -- common/autotest_common.sh@970 -- # wait 187402 00:05:31.433 00:05:31.433 real 0m1.799s 00:05:31.433 user 0m2.250s 00:05:31.433 sys 0m0.562s 00:05:31.433 16:04:14 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.433 16:04:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.433 ************************************ 00:05:31.433 END TEST rpc 00:05:31.433 ************************************ 00:05:31.434 16:04:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.434 16:04:14 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.434 16:04:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.434 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:31.434 ************************************ 00:05:31.434 START TEST skip_rpc 00:05:31.434 ************************************ 00:05:31.434 16:04:14 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.692 * Looking for test storage... 00:05:31.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:31.692 16:04:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.692 16:04:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:31.692 16:04:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.692 16:04:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.692 16:04:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.692 16:04:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.692 ************************************ 00:05:31.692 START TEST skip_rpc 00:05:31.692 ************************************ 00:05:31.692 16:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:31.692 16:04:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=187833 00:05:31.692 16:04:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.692 16:04:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.692 16:04:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.692 [2024-07-15 16:04:14.532538] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
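The skip_rpc case starting here boils down to: launch spdk_tgt with --no-rpc-server, wait, and assert that any RPC fails. A minimal sketch of that flow, assuming an SPDK checkout at $SPDK_DIR (the variable is illustrative):

# Target comes up with no JSON-RPC server, so /var/tmp/spdk.sock is never created
$SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
# Any RPC must now fail; spdk_get_version is the probe the test uses
if $SPDK_DIR/scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC succeeded without an RPC server" >&2
    exit 1
fi
kill $tgt_pid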
00:05:31.692 [2024-07-15 16:04:14.532616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187833 ] 00:05:31.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.692 [2024-07-15 16:04:14.588807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.692 [2024-07-15 16:04:14.668852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 187833 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 187833 ']' 00:05:36.945 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 187833 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 187833 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 187833' 00:05:36.946 killing process with pid 187833 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 187833 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 187833 00:05:36.946 00:05:36.946 real 0m5.430s 00:05:36.946 user 0m5.149s 00:05:36.946 sys 0m0.289s 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.946 16:04:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.946 ************************************ 00:05:36.946 END TEST skip_rpc 
00:05:36.946 ************************************ 00:05:37.204 16:04:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.204 16:04:19 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.204 16:04:19 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.204 16:04:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.204 ************************************ 00:05:37.204 START TEST skip_rpc_with_json 00:05:37.204 ************************************ 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=188520 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 188520 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 188520 ']' 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.204 16:04:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.204 [2024-07-15 16:04:20.014559] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
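skip_rpc_with_json, starting here, drives the target over RPC, snapshots the result with save_config, and re-launches the target from that JSON. Roughly (a sketch under the same $SPDK_DIR assumption; the real test adds verification steps):

# Build some state over RPC, then capture the whole runtime config as JSON
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp
$SPDK_DIR/scripts/rpc.py save_config > /tmp/config.json
# A fresh target can boot straight from the file, with no RPC server at all
$SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json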
00:05:37.205 [2024-07-15 16:04:20.014661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188520 ] 00:05:37.205 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.205 [2024-07-15 16:04:20.077980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.205 [2024-07-15 16:04:20.167562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.463 [2024-07-15 16:04:20.399053] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.463 request: 00:05:37.463 { 00:05:37.463 "trtype": "tcp", 00:05:37.463 "method": "nvmf_get_transports", 00:05:37.463 "req_id": 1 00:05:37.463 } 00:05:37.463 Got JSON-RPC error response 00:05:37.463 response: 00:05:37.463 { 00:05:37.463 "code": -19, 00:05:37.463 "message": "No such device" 00:05:37.463 } 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.463 [2024-07-15 16:04:20.407164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.463 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.721 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.721 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:37.721 { 00:05:37.721 "subsystems": [ 00:05:37.721 { 00:05:37.721 "subsystem": "vfio_user_target", 00:05:37.721 "config": null 00:05:37.721 }, 00:05:37.721 { 00:05:37.721 "subsystem": "keyring", 00:05:37.721 "config": [] 00:05:37.721 }, 00:05:37.721 { 00:05:37.721 "subsystem": "iobuf", 00:05:37.721 "config": [ 00:05:37.721 { 00:05:37.721 "method": "iobuf_set_options", 00:05:37.721 "params": { 00:05:37.721 "small_pool_count": 8192, 00:05:37.721 "large_pool_count": 1024, 00:05:37.721 "small_bufsize": 8192, 00:05:37.721 "large_bufsize": 135168 00:05:37.721 } 00:05:37.721 } 00:05:37.721 ] 00:05:37.721 }, 00:05:37.721 { 00:05:37.721 "subsystem": "sock", 00:05:37.721 "config": [ 00:05:37.721 { 00:05:37.721 "method": "sock_set_default_impl", 00:05:37.721 "params": { 00:05:37.721 "impl_name": "posix" 00:05:37.721 } 00:05:37.721 }, 00:05:37.721 { 00:05:37.721 "method": 
"sock_impl_set_options", 00:05:37.721 "params": { 00:05:37.721 "impl_name": "ssl", 00:05:37.721 "recv_buf_size": 4096, 00:05:37.721 "send_buf_size": 4096, 00:05:37.721 "enable_recv_pipe": true, 00:05:37.721 "enable_quickack": false, 00:05:37.721 "enable_placement_id": 0, 00:05:37.721 "enable_zerocopy_send_server": true, 00:05:37.721 "enable_zerocopy_send_client": false, 00:05:37.721 "zerocopy_threshold": 0, 00:05:37.721 "tls_version": 0, 00:05:37.721 "enable_ktls": false 00:05:37.721 } 00:05:37.721 }, 00:05:37.721 { 00:05:37.721 "method": "sock_impl_set_options", 00:05:37.721 "params": { 00:05:37.722 "impl_name": "posix", 00:05:37.722 "recv_buf_size": 2097152, 00:05:37.722 "send_buf_size": 2097152, 00:05:37.722 "enable_recv_pipe": true, 00:05:37.722 "enable_quickack": false, 00:05:37.722 "enable_placement_id": 0, 00:05:37.722 "enable_zerocopy_send_server": true, 00:05:37.722 "enable_zerocopy_send_client": false, 00:05:37.722 "zerocopy_threshold": 0, 00:05:37.722 "tls_version": 0, 00:05:37.722 "enable_ktls": false 00:05:37.722 } 00:05:37.722 } 00:05:37.722 ] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "vmd", 00:05:37.722 "config": [] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "accel", 00:05:37.722 "config": [ 00:05:37.722 { 00:05:37.722 "method": "accel_set_options", 00:05:37.722 "params": { 00:05:37.722 "small_cache_size": 128, 00:05:37.722 "large_cache_size": 16, 00:05:37.722 "task_count": 2048, 00:05:37.722 "sequence_count": 2048, 00:05:37.722 "buf_count": 2048 00:05:37.722 } 00:05:37.722 } 00:05:37.722 ] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "bdev", 00:05:37.722 "config": [ 00:05:37.722 { 00:05:37.722 "method": "bdev_set_options", 00:05:37.722 "params": { 00:05:37.722 "bdev_io_pool_size": 65535, 00:05:37.722 "bdev_io_cache_size": 256, 00:05:37.722 "bdev_auto_examine": true, 00:05:37.722 "iobuf_small_cache_size": 128, 00:05:37.722 "iobuf_large_cache_size": 16 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "bdev_raid_set_options", 00:05:37.722 "params": { 00:05:37.722 "process_window_size_kb": 1024 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "bdev_iscsi_set_options", 00:05:37.722 "params": { 00:05:37.722 "timeout_sec": 30 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "bdev_nvme_set_options", 00:05:37.722 "params": { 00:05:37.722 "action_on_timeout": "none", 00:05:37.722 "timeout_us": 0, 00:05:37.722 "timeout_admin_us": 0, 00:05:37.722 "keep_alive_timeout_ms": 10000, 00:05:37.722 "arbitration_burst": 0, 00:05:37.722 "low_priority_weight": 0, 00:05:37.722 "medium_priority_weight": 0, 00:05:37.722 "high_priority_weight": 0, 00:05:37.722 "nvme_adminq_poll_period_us": 10000, 00:05:37.722 "nvme_ioq_poll_period_us": 0, 00:05:37.722 "io_queue_requests": 0, 00:05:37.722 "delay_cmd_submit": true, 00:05:37.722 "transport_retry_count": 4, 00:05:37.722 "bdev_retry_count": 3, 00:05:37.722 "transport_ack_timeout": 0, 00:05:37.722 "ctrlr_loss_timeout_sec": 0, 00:05:37.722 "reconnect_delay_sec": 0, 00:05:37.722 "fast_io_fail_timeout_sec": 0, 00:05:37.722 "disable_auto_failback": false, 00:05:37.722 "generate_uuids": false, 00:05:37.722 "transport_tos": 0, 00:05:37.722 "nvme_error_stat": false, 00:05:37.722 "rdma_srq_size": 0, 00:05:37.722 "io_path_stat": false, 00:05:37.722 "allow_accel_sequence": false, 00:05:37.722 "rdma_max_cq_size": 0, 00:05:37.722 "rdma_cm_event_timeout_ms": 0, 00:05:37.722 "dhchap_digests": [ 00:05:37.722 "sha256", 00:05:37.722 "sha384", 00:05:37.722 "sha512" 
00:05:37.722 ], 00:05:37.722 "dhchap_dhgroups": [ 00:05:37.722 "null", 00:05:37.722 "ffdhe2048", 00:05:37.722 "ffdhe3072", 00:05:37.722 "ffdhe4096", 00:05:37.722 "ffdhe6144", 00:05:37.722 "ffdhe8192" 00:05:37.722 ] 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "bdev_nvme_set_hotplug", 00:05:37.722 "params": { 00:05:37.722 "period_us": 100000, 00:05:37.722 "enable": false 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "bdev_wait_for_examine" 00:05:37.722 } 00:05:37.722 ] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "scsi", 00:05:37.722 "config": null 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "scheduler", 00:05:37.722 "config": [ 00:05:37.722 { 00:05:37.722 "method": "framework_set_scheduler", 00:05:37.722 "params": { 00:05:37.722 "name": "static" 00:05:37.722 } 00:05:37.722 } 00:05:37.722 ] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "vhost_scsi", 00:05:37.722 "config": [] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "vhost_blk", 00:05:37.722 "config": [] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "ublk", 00:05:37.722 "config": [] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "nbd", 00:05:37.722 "config": [] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "nvmf", 00:05:37.722 "config": [ 00:05:37.722 { 00:05:37.722 "method": "nvmf_set_config", 00:05:37.722 "params": { 00:05:37.722 "discovery_filter": "match_any", 00:05:37.722 "admin_cmd_passthru": { 00:05:37.722 "identify_ctrlr": false 00:05:37.722 } 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "nvmf_set_max_subsystems", 00:05:37.722 "params": { 00:05:37.722 "max_subsystems": 1024 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "nvmf_set_crdt", 00:05:37.722 "params": { 00:05:37.722 "crdt1": 0, 00:05:37.722 "crdt2": 0, 00:05:37.722 "crdt3": 0 00:05:37.722 } 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "method": "nvmf_create_transport", 00:05:37.722 "params": { 00:05:37.722 "trtype": "TCP", 00:05:37.722 "max_queue_depth": 128, 00:05:37.722 "max_io_qpairs_per_ctrlr": 127, 00:05:37.722 "in_capsule_data_size": 4096, 00:05:37.722 "max_io_size": 131072, 00:05:37.722 "io_unit_size": 131072, 00:05:37.722 "max_aq_depth": 128, 00:05:37.722 "num_shared_buffers": 511, 00:05:37.722 "buf_cache_size": 4294967295, 00:05:37.722 "dif_insert_or_strip": false, 00:05:37.722 "zcopy": false, 00:05:37.722 "c2h_success": true, 00:05:37.722 "sock_priority": 0, 00:05:37.722 "abort_timeout_sec": 1, 00:05:37.722 "ack_timeout": 0, 00:05:37.722 "data_wr_pool_size": 0 00:05:37.722 } 00:05:37.722 } 00:05:37.722 ] 00:05:37.722 }, 00:05:37.722 { 00:05:37.722 "subsystem": "iscsi", 00:05:37.722 "config": [ 00:05:37.722 { 00:05:37.722 "method": "iscsi_set_options", 00:05:37.722 "params": { 00:05:37.722 "node_base": "iqn.2016-06.io.spdk", 00:05:37.722 "max_sessions": 128, 00:05:37.722 "max_connections_per_session": 2, 00:05:37.722 "max_queue_depth": 64, 00:05:37.722 "default_time2wait": 2, 00:05:37.722 "default_time2retain": 20, 00:05:37.722 "first_burst_length": 8192, 00:05:37.722 "immediate_data": true, 00:05:37.722 "allow_duplicated_isid": false, 00:05:37.722 "error_recovery_level": 0, 00:05:37.722 "nop_timeout": 60, 00:05:37.722 "nop_in_interval": 30, 00:05:37.722 "disable_chap": false, 00:05:37.722 "require_chap": false, 00:05:37.722 "mutual_chap": false, 00:05:37.722 "chap_group": 0, 00:05:37.722 "max_large_datain_per_connection": 64, 00:05:37.722 "max_r2t_per_connection": 4, 00:05:37.722 
"pdu_pool_size": 36864, 00:05:37.722 "immediate_data_pool_size": 16384, 00:05:37.722 "data_out_pool_size": 2048 00:05:37.722 } 00:05:37.722 } 00:05:37.722 ] 00:05:37.722 } 00:05:37.722 ] 00:05:37.722 } 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 188520 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 188520 ']' 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 188520 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 188520 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 188520' 00:05:37.722 killing process with pid 188520 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 188520 00:05:37.722 16:04:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 188520 00:05:38.288 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=188660 00:05:38.288 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:38.288 16:04:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:43.544 16:04:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 188660 00:05:43.544 16:04:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 188660 ']' 00:05:43.544 16:04:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 188660 00:05:43.544 16:04:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:43.544 16:04:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:43.544 16:04:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 188660 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 188660' 00:05:43.544 killing process with pid 188660 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 188660 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 188660 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.544 00:05:43.544 real 0m6.429s 
00:05:43.544 user 0m6.051s 00:05:43.544 sys 0m0.652s 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.544 ************************************ 00:05:43.544 END TEST skip_rpc_with_json 00:05:43.544 ************************************ 00:05:43.544 16:04:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:43.544 16:04:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.544 16:04:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.544 16:04:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.544 ************************************ 00:05:43.544 START TEST skip_rpc_with_delay 00:05:43.544 ************************************ 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.544 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.545 [2024-07-15 16:04:26.497335] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:43.545 [2024-07-15 16:04:26.497445] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.545 00:05:43.545 real 0m0.070s 00:05:43.545 user 0m0.044s 00:05:43.545 sys 0m0.026s 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.545 16:04:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.545 ************************************ 00:05:43.545 END TEST skip_rpc_with_delay 00:05:43.545 ************************************ 00:05:43.803 16:04:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.803 16:04:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.803 16:04:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.803 16:04:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.803 16:04:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.803 16:04:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.803 ************************************ 00:05:43.803 START TEST exit_on_failed_rpc_init 00:05:43.803 ************************************ 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=189378 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 189378 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 189378 ']' 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.803 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.803 [2024-07-15 16:04:26.618919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
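The waitforlisten 189378 call above polls until the target's RPC socket answers. A simplified version of that helper (the retry count and probe are approximations of the real common.sh logic, not a copy of it):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local i
    for i in $(seq 1 100); do
        # rpc_get_methods only succeeds once the RPC server accepts connections
        $SPDK_DIR/scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
        kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
        sleep 0.1
    done
    return 1
}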
00:05:43.803 [2024-07-15 16:04:26.619000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189378 ] 00:05:43.803 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.803 [2024-07-15 16:04:26.676802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.803 [2024-07-15 16:04:26.756173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:44.063 16:04:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:44.063 [2024-07-15 16:04:27.034112] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:44.063 [2024-07-15 16:04:27.034195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189387 ] 00:05:44.365 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.365 [2024-07-15 16:04:27.096114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.365 [2024-07-15 16:04:27.184210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.365 [2024-07-15 16:04:27.184341] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
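This "in use" error is the one exit_on_failed_rpc_init is fishing for: a second target launched against the default socket while the first still holds it. Stripped to its essentials (sketch):

# First target claims /var/tmp/spdk.sock
$SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
waitforlisten $!
# Second target, different core mask, same default socket: RPC init fails
# and the process exits non-zero, which is what the test asserts
$SPDK_DIR/build/bin/spdk_tgt -m 0x2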
00:05:44.365 [2024-07-15 16:04:27.184361] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.365 [2024-07-15 16:04:27.184372] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 189378 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 189378 ']' 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 189378 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 189378 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 189378' 00:05:44.365 killing process with pid 189378 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 189378 00:05:44.365 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 189378 00:05:44.953 00:05:44.953 real 0m1.114s 00:05:44.953 user 0m1.221s 00:05:44.953 sys 0m0.418s 00:05:44.953 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.953 16:04:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.953 ************************************ 00:05:44.953 END TEST exit_on_failed_rpc_init 00:05:44.953 ************************************ 00:05:44.953 16:04:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:44.953 00:05:44.953 real 0m13.307s 00:05:44.953 user 0m12.562s 00:05:44.953 sys 0m1.569s 00:05:44.953 16:04:27 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.953 16:04:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.953 ************************************ 00:05:44.953 END TEST skip_rpc 00:05:44.953 ************************************ 00:05:44.953 16:04:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.953 16:04:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.953 16:04:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.953 16:04:27 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.953 ************************************ 00:05:44.953 START TEST rpc_client 00:05:44.953 ************************************ 00:05:44.953 16:04:27 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.953 * Looking for test storage... 00:05:44.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:44.953 16:04:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:44.953 OK 00:05:44.953 16:04:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.953 00:05:44.953 real 0m0.071s 00:05:44.953 user 0m0.027s 00:05:44.953 sys 0m0.049s 00:05:44.953 16:04:27 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.953 16:04:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:44.953 ************************************ 00:05:44.953 END TEST rpc_client 00:05:44.953 ************************************ 00:05:44.953 16:04:27 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.953 16:04:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.953 16:04:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.953 16:04:27 -- common/autotest_common.sh@10 -- # set +x 00:05:44.953 ************************************ 00:05:44.953 START TEST json_config 00:05:44.953 ************************************ 00:05:44.953 16:04:27 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.953 16:04:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.953 16:04:27 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.953 16:04:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.953 16:04:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.953 16:04:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.953 16:04:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.953 16:04:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.953 16:04:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.953 16:04:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:44.954 16:04:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@47 -- # : 0 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.954 16:04:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:44.954 16:04:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:45.212 16:04:27 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.212 16:04:27 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:45.212 INFO: JSON configuration test init 00:05:45.212 16:04:27 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:45.212 16:04:27 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.212 16:04:27 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.212 16:04:27 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.212 16:04:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:45.212 16:04:27 json_config -- json_config/common.sh@10 -- # shift 00:05:45.212 16:04:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.212 16:04:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.212 16:04:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.212 16:04:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.212 16:04:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.212 16:04:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=189628 00:05:45.212 16:04:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.212 16:04:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.212 Waiting for target to run... 
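Here the json_config target runs on a non-default socket, so every tgt_rpc below is rpc.py with an explicit -s. Condensed from the commands in this log:

# Target paused at --wait-for-rpc on a custom socket
$SPDK_DIR/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# Feed a generated NVMe config into the paused target, as json_config.sh does
$SPDK_DIR/scripts/gen_nvme.sh --json-with-subsystems | \
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config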
00:05:45.212 16:04:27 json_config -- json_config/common.sh@25 -- # waitforlisten 189628 /var/tmp/spdk_tgt.sock 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@827 -- # '[' -z 189628 ']' 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.212 16:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.212 [2024-07-15 16:04:27.988008] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:45.212 [2024-07-15 16:04:27.988125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189628 ] 00:05:45.212 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.471 [2024-07-15 16:04:28.325683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.471 [2024-07-15 16:04:28.381818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.035 16:04:28 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.035 16:04:28 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:46.035 16:04:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:46.035 00:05:46.035 16:04:28 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:46.035 16:04:28 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:46.035 16:04:28 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:46.035 16:04:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.035 16:04:28 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:46.035 16:04:28 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:46.035 16:04:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:46.035 16:04:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.036 16:04:28 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:46.036 16:04:28 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:46.036 16:04:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:49.318 16:04:32 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:49.318 16:04:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:49.318 16:04:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.318 16:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.318 16:04:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:49.318 16:04:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:49.318 16:04:32 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:49.318 16:04:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:49.318 16:04:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:49.318 16:04:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:49.575 16:04:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.575 16:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:49.575 16:04:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:49.575 16:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:49.575 16:04:32 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.575 16:04:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.868 MallocForNvmf0 00:05:49.868 16:04:32 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.868 16:04:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.868 MallocForNvmf1 00:05:49.868 16:04:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.868 16:04:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.125 [2024-07-15 16:04:33.055573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.125 16:04:33 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.125 16:04:33 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.381 16:04:33 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.381 16:04:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:50.638 16:04:33 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.638 16:04:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.895 16:04:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.895 16:04:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.152 [2024-07-15 16:04:34.026645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:51.152 16:04:34 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:51.152 16:04:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.152 16:04:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.152 16:04:34 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:51.152 16:04:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.152 16:04:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.152 16:04:34 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:51.152 16:04:34 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.152 16:04:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:51.410 MallocBdevForConfigChangeCheck 00:05:51.410 16:04:34 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:51.410 16:04:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:51.410 16:04:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.410 16:04:34 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:51.410 16:04:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.974 16:04:34 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:51.974 INFO: shutting down applications... 
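The NVMe-oF/TCP target exercised above was assembled entirely over the RPC socket, one call at a time. The same build-out as a standalone sketch (commands and arguments are exactly those issued in the trace; the $RPC shorthand is illustrative):

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB bdev, 1 KiB blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport for NVMe-oF
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420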
00:05:51.974 16:04:34 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:51.974 16:04:34 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:51.974 16:04:34 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:51.974 16:04:34 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:53.871 Calling clear_iscsi_subsystem 00:05:53.871 Calling clear_nvmf_subsystem 00:05:53.871 Calling clear_nbd_subsystem 00:05:53.871 Calling clear_ublk_subsystem 00:05:53.871 Calling clear_vhost_blk_subsystem 00:05:53.871 Calling clear_vhost_scsi_subsystem 00:05:53.871 Calling clear_bdev_subsystem 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@345 -- # break 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:53.871 16:04:36 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:53.871 16:04:36 json_config -- json_config/common.sh@31 -- # local app=target 00:05:53.871 16:04:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:53.871 16:04:36 json_config -- json_config/common.sh@35 -- # [[ -n 189628 ]] 00:05:53.871 16:04:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 189628 00:05:53.871 16:04:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:53.871 16:04:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.871 16:04:36 json_config -- json_config/common.sh@41 -- # kill -0 189628 00:05:53.871 16:04:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.439 16:04:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.439 16:04:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.439 16:04:37 json_config -- json_config/common.sh@41 -- # kill -0 189628 00:05:54.439 16:04:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.439 16:04:37 json_config -- json_config/common.sh@43 -- # break 00:05:54.439 16:04:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.439 16:04:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.439 SPDK target shutdown done 00:05:54.439 16:04:37 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:54.439 INFO: relaunching applications... 
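json_config_test_shutdown_app, traced above, sends SIGINT and then polls the PID in up to thirty half-second steps before reporting 'SPDK target shutdown done'. A minimal sketch with the same budget (shutdown_app is a hypothetical name; the kill -9 fallback is an assumption, not something the trace shows):

  shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
      if ! kill -0 "$pid" 2>/dev/null; then   # process gone: clean shutdown
        echo 'SPDK target shutdown done'
        return 0
      fi
      sleep 0.5
    done
    kill -9 "$pid"   # assumed fallback for a hung target
  }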
00:05:54.439 16:04:37 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.439 16:04:37 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.439 16:04:37 json_config -- json_config/common.sh@10 -- # shift 00:05:54.439 16:04:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.439 16:04:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.439 16:04:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.439 16:04:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.439 16:04:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.439 16:04:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=190820 00:05:54.439 16:04:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.439 16:04:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.439 Waiting for target to run... 00:05:54.439 16:04:37 json_config -- json_config/common.sh@25 -- # waitforlisten 190820 /var/tmp/spdk_tgt.sock 00:05:54.439 16:04:37 json_config -- common/autotest_common.sh@827 -- # '[' -z 190820 ']' 00:05:54.439 16:04:37 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.439 16:04:37 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.439 16:04:37 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.439 16:04:37 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.439 16:04:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.439 [2024-07-15 16:04:37.313909] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:54.439 [2024-07-15 16:04:37.314000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190820 ] 00:05:54.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.005 [2024-07-15 16:04:37.864295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.005 [2024-07-15 16:04:37.935137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.278 [2024-07-15 16:04:40.956721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.278 [2024-07-15 16:04:40.989194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:58.843 16:04:41 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.843 16:04:41 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:58.843 16:04:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:58.843 00:05:58.843 16:04:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:58.843 16:04:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:58.843 INFO: Checking if target configuration is the same... 
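The identity check announced above is a diff of two normalized dumps, as the json_diff.sh trace that follows makes concrete: dump the live config, take the reference file, canonicalize both with config_filter.py -method sort, and compare. In sketch form (the live/ref temp-file names are illustrative; commands and paths match the log):

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  SORT="test/json_config/config_filter.py -method sort"
  live=$(mktemp); ref=$(mktemp)
  $RPC save_config | $SORT > "$live"          # what the target is running now
  $SORT < spdk_tgt_config.json > "$ref"       # what it was started with
  diff -u "$live" "$ref" && echo 'INFO: JSON config files are the same'
  rm -f "$live" "$ref"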
00:05:58.843 16:04:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.843 16:04:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:58.843 16:04:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.843 + '[' 2 -ne 2 ']' 00:05:58.843 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:58.843 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:58.843 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.843 +++ basename /dev/fd/62 00:05:58.843 ++ mktemp /tmp/62.XXX 00:05:58.843 + tmp_file_1=/tmp/62.lop 00:05:58.843 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:58.843 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:58.843 + tmp_file_2=/tmp/spdk_tgt_config.json.DgC 00:05:58.843 + ret=0 00:05:58.843 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.408 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.408 + diff -u /tmp/62.lop /tmp/spdk_tgt_config.json.DgC 00:05:59.408 + echo 'INFO: JSON config files are the same' 00:05:59.408 INFO: JSON config files are the same 00:05:59.408 + rm /tmp/62.lop /tmp/spdk_tgt_config.json.DgC 00:05:59.408 + exit 0 00:05:59.408 16:04:42 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:59.408 16:04:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:59.408 INFO: changing configuration and checking if this can be detected... 00:05:59.408 16:04:42 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:59.408 16:04:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:59.667 16:04:42 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.667 16:04:42 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:59.667 16:04:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.667 + '[' 2 -ne 2 ']' 00:05:59.667 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:59.667 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:59.667 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.667 +++ basename /dev/fd/62 00:05:59.667 ++ mktemp /tmp/62.XXX 00:05:59.667 + tmp_file_1=/tmp/62.t7x 00:05:59.667 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.667 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:59.667 + tmp_file_2=/tmp/spdk_tgt_config.json.N0n 00:05:59.667 + ret=0 00:05:59.667 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.925 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.925 + diff -u /tmp/62.t7x /tmp/spdk_tgt_config.json.N0n 00:05:59.925 + ret=1 00:05:59.925 + echo '=== Start of file: /tmp/62.t7x ===' 00:05:59.925 + cat /tmp/62.t7x 00:05:59.925 + echo '=== End of file: /tmp/62.t7x ===' 00:05:59.925 + echo '' 00:05:59.925 + echo '=== Start of file: /tmp/spdk_tgt_config.json.N0n ===' 00:05:59.925 + cat /tmp/spdk_tgt_config.json.N0n 00:05:59.925 + echo '=== End of file: /tmp/spdk_tgt_config.json.N0n ===' 00:05:59.925 + echo '' 00:05:59.925 + rm /tmp/62.t7x /tmp/spdk_tgt_config.json.N0n 00:05:59.925 + exit 1 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:59.925 INFO: configuration change detected. 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 190820 ]] 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.925 16:04:42 json_config -- json_config/json_config.sh@323 -- # killprocess 190820 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@946 -- # '[' -z 190820 ']' 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@950 -- # kill -0 190820 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@951 -- # uname 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.925 16:04:42 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 190820 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 190820' 00:05:59.925 killing process with pid 190820 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@965 -- # kill 190820 00:05:59.925 16:04:42 json_config -- common/autotest_common.sh@970 -- # wait 190820 00:06:01.821 16:04:44 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.821 16:04:44 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:01.821 16:04:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.821 16:04:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.821 16:04:44 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:01.821 16:04:44 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:01.821 INFO: Success 00:06:01.821 00:06:01.821 real 0m16.625s 00:06:01.821 user 0m18.437s 00:06:01.821 sys 0m2.059s 00:06:01.821 16:04:44 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.821 16:04:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.821 ************************************ 00:06:01.821 END TEST json_config 00:06:01.821 ************************************ 00:06:01.821 16:04:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:01.821 16:04:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.821 16:04:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.821 16:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:01.821 ************************************ 00:06:01.821 START TEST json_config_extra_key 00:06:01.821 ************************************ 00:06:01.821 16:04:44 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:01.821 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.821 16:04:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.822 16:04:44 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.822 16:04:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.822 16:04:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.822 16:04:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.822 16:04:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.822 16:04:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.822 16:04:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.822 16:04:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:01.822 16:04:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.822 16:04:44 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.822 16:04:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:01.822 INFO: launching applications... 00:06:01.822 16:04:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=191867 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.822 Waiting for target to run... 
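The nvmf/common.sh block sourced above derives the initiator identity from nvme-cli instead of hard-coding it: nvme gen-hostnqn emits an nqn.2014-08.org.nvmexpress:uuid:... string, and the host ID is its trailing UUID. A hedged sketch of that derivation and one way it gets used (the parameter expansion and the connect example are illustrative, not lifted from common.sh):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # trailing UUID portion
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Illustrative use against the test subsystem NQN defined above:
  nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn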
00:06:01.822 16:04:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 191867 /var/tmp/spdk_tgt.sock 00:06:01.822 16:04:44 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 191867 ']' 00:06:01.822 16:04:44 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.822 16:04:44 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.822 16:04:44 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.822 16:04:44 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.822 16:04:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.822 [2024-07-15 16:04:44.646898] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:01.822 [2024-07-15 16:04:44.646986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191867 ] 00:06:01.822 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.391 [2024-07-15 16:04:45.151386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.391 [2024-07-15 16:04:45.225300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.649 16:04:45 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.649 16:04:45 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:02.649 00:06:02.649 16:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:02.649 INFO: shutting down applications... 
00:06:02.649 16:04:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 191867 ]] 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 191867 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 191867 00:06:02.649 16:04:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.214 16:04:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.214 16:04:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.214 16:04:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 191867 00:06:03.214 16:04:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.214 16:04:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:03.214 16:04:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.214 16:04:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.214 SPDK target shutdown done 00:06:03.214 16:04:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:03.214 Success 00:06:03.214 00:06:03.214 real 0m1.544s 00:06:03.214 user 0m1.343s 00:06:03.214 sys 0m0.586s 00:06:03.214 16:04:46 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.214 16:04:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.214 ************************************ 00:06:03.214 END TEST json_config_extra_key 00:06:03.214 ************************************ 00:06:03.215 16:04:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.215 16:04:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.215 16:04:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.215 16:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:03.215 ************************************ 00:06:03.215 START TEST alias_rpc 00:06:03.215 ************************************ 00:06:03.215 16:04:46 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.473 * Looking for test storage... 
00:06:03.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:03.473 16:04:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.473 16:04:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=192050 00:06:03.473 16:04:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.473 16:04:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 192050 00:06:03.473 16:04:46 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 192050 ']' 00:06:03.473 16:04:46 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.473 16:04:46 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.473 16:04:46 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.473 16:04:46 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.473 16:04:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.473 [2024-07-15 16:04:46.252892] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:03.473 [2024-07-15 16:04:46.252972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192050 ] 00:06:03.473 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.473 [2024-07-15 16:04:46.310775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.473 [2024-07-15 16:04:46.395433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.731 16:04:46 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.731 16:04:46 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:03.731 16:04:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:03.989 16:04:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 192050 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 192050 ']' 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 192050 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 192050 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 192050' 00:06:03.989 killing process with pid 192050 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@965 -- # kill 192050 00:06:03.989 16:04:46 alias_rpc -- common/autotest_common.sh@970 -- # wait 192050 00:06:04.556 00:06:04.556 real 0m1.187s 00:06:04.556 user 0m1.259s 00:06:04.556 sys 0m0.419s 00:06:04.556 16:04:47 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.556 16:04:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.556 
************************************ 00:06:04.556 END TEST alias_rpc 00:06:04.556 ************************************ 00:06:04.556 16:04:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:04.556 16:04:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:04.556 16:04:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.556 16:04:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.556 16:04:47 -- common/autotest_common.sh@10 -- # set +x 00:06:04.556 ************************************ 00:06:04.556 START TEST spdkcli_tcp 00:06:04.556 ************************************ 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:04.556 * Looking for test storage... 00:06:04.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=192237 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:04.556 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 192237 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 192237 ']' 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.556 16:04:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.556 [2024-07-15 16:04:47.496588] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:04.556 [2024-07-15 16:04:47.496679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192237 ] 00:06:04.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.814 [2024-07-15 16:04:47.556914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.814 [2024-07-15 16:04:47.642030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.814 [2024-07-15 16:04:47.642034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.072 16:04:47 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.072 16:04:47 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:05.072 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=192364 00:06:05.072 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:05.072 16:04:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:05.331 [ 00:06:05.331 "bdev_malloc_delete", 00:06:05.331 "bdev_malloc_create", 00:06:05.331 "bdev_null_resize", 00:06:05.331 "bdev_null_delete", 00:06:05.331 "bdev_null_create", 00:06:05.331 "bdev_nvme_cuse_unregister", 00:06:05.331 "bdev_nvme_cuse_register", 00:06:05.331 "bdev_opal_new_user", 00:06:05.331 "bdev_opal_set_lock_state", 00:06:05.331 "bdev_opal_delete", 00:06:05.331 "bdev_opal_get_info", 00:06:05.331 "bdev_opal_create", 00:06:05.331 "bdev_nvme_opal_revert", 00:06:05.331 "bdev_nvme_opal_init", 00:06:05.331 "bdev_nvme_send_cmd", 00:06:05.331 "bdev_nvme_get_path_iostat", 00:06:05.331 "bdev_nvme_get_mdns_discovery_info", 00:06:05.331 "bdev_nvme_stop_mdns_discovery", 00:06:05.331 "bdev_nvme_start_mdns_discovery", 00:06:05.331 "bdev_nvme_set_multipath_policy", 00:06:05.331 "bdev_nvme_set_preferred_path", 00:06:05.331 "bdev_nvme_get_io_paths", 00:06:05.331 "bdev_nvme_remove_error_injection", 00:06:05.331 "bdev_nvme_add_error_injection", 00:06:05.331 "bdev_nvme_get_discovery_info", 00:06:05.331 "bdev_nvme_stop_discovery", 00:06:05.331 "bdev_nvme_start_discovery", 00:06:05.331 "bdev_nvme_get_controller_health_info", 00:06:05.331 "bdev_nvme_disable_controller", 00:06:05.331 "bdev_nvme_enable_controller", 00:06:05.331 "bdev_nvme_reset_controller", 00:06:05.331 "bdev_nvme_get_transport_statistics", 00:06:05.331 "bdev_nvme_apply_firmware", 00:06:05.331 "bdev_nvme_detach_controller", 00:06:05.331 "bdev_nvme_get_controllers", 00:06:05.331 "bdev_nvme_attach_controller", 00:06:05.331 "bdev_nvme_set_hotplug", 00:06:05.331 "bdev_nvme_set_options", 00:06:05.331 "bdev_passthru_delete", 00:06:05.331 "bdev_passthru_create", 00:06:05.331 "bdev_lvol_set_parent_bdev", 00:06:05.331 "bdev_lvol_set_parent", 00:06:05.331 "bdev_lvol_check_shallow_copy", 00:06:05.331 "bdev_lvol_start_shallow_copy", 00:06:05.331 "bdev_lvol_grow_lvstore", 00:06:05.331 "bdev_lvol_get_lvols", 00:06:05.331 "bdev_lvol_get_lvstores", 00:06:05.331 "bdev_lvol_delete", 00:06:05.331 "bdev_lvol_set_read_only", 00:06:05.331 "bdev_lvol_resize", 00:06:05.331 "bdev_lvol_decouple_parent", 00:06:05.331 "bdev_lvol_inflate", 00:06:05.331 "bdev_lvol_rename", 00:06:05.331 "bdev_lvol_clone_bdev", 00:06:05.331 "bdev_lvol_clone", 00:06:05.331 "bdev_lvol_snapshot", 00:06:05.331 "bdev_lvol_create", 00:06:05.331 "bdev_lvol_delete_lvstore", 00:06:05.331 "bdev_lvol_rename_lvstore", 
00:06:05.331 "bdev_lvol_create_lvstore", 00:06:05.331 "bdev_raid_set_options", 00:06:05.331 "bdev_raid_remove_base_bdev", 00:06:05.331 "bdev_raid_add_base_bdev", 00:06:05.331 "bdev_raid_delete", 00:06:05.331 "bdev_raid_create", 00:06:05.331 "bdev_raid_get_bdevs", 00:06:05.331 "bdev_error_inject_error", 00:06:05.331 "bdev_error_delete", 00:06:05.331 "bdev_error_create", 00:06:05.331 "bdev_split_delete", 00:06:05.331 "bdev_split_create", 00:06:05.331 "bdev_delay_delete", 00:06:05.331 "bdev_delay_create", 00:06:05.331 "bdev_delay_update_latency", 00:06:05.331 "bdev_zone_block_delete", 00:06:05.331 "bdev_zone_block_create", 00:06:05.331 "blobfs_create", 00:06:05.331 "blobfs_detect", 00:06:05.331 "blobfs_set_cache_size", 00:06:05.331 "bdev_aio_delete", 00:06:05.331 "bdev_aio_rescan", 00:06:05.331 "bdev_aio_create", 00:06:05.331 "bdev_ftl_set_property", 00:06:05.331 "bdev_ftl_get_properties", 00:06:05.331 "bdev_ftl_get_stats", 00:06:05.331 "bdev_ftl_unmap", 00:06:05.331 "bdev_ftl_unload", 00:06:05.331 "bdev_ftl_delete", 00:06:05.331 "bdev_ftl_load", 00:06:05.331 "bdev_ftl_create", 00:06:05.331 "bdev_virtio_attach_controller", 00:06:05.331 "bdev_virtio_scsi_get_devices", 00:06:05.331 "bdev_virtio_detach_controller", 00:06:05.331 "bdev_virtio_blk_set_hotplug", 00:06:05.331 "bdev_iscsi_delete", 00:06:05.331 "bdev_iscsi_create", 00:06:05.331 "bdev_iscsi_set_options", 00:06:05.331 "accel_error_inject_error", 00:06:05.331 "ioat_scan_accel_module", 00:06:05.331 "dsa_scan_accel_module", 00:06:05.331 "iaa_scan_accel_module", 00:06:05.331 "vfu_virtio_create_scsi_endpoint", 00:06:05.331 "vfu_virtio_scsi_remove_target", 00:06:05.331 "vfu_virtio_scsi_add_target", 00:06:05.331 "vfu_virtio_create_blk_endpoint", 00:06:05.331 "vfu_virtio_delete_endpoint", 00:06:05.331 "keyring_file_remove_key", 00:06:05.331 "keyring_file_add_key", 00:06:05.331 "keyring_linux_set_options", 00:06:05.331 "iscsi_get_histogram", 00:06:05.331 "iscsi_enable_histogram", 00:06:05.331 "iscsi_set_options", 00:06:05.331 "iscsi_get_auth_groups", 00:06:05.331 "iscsi_auth_group_remove_secret", 00:06:05.331 "iscsi_auth_group_add_secret", 00:06:05.331 "iscsi_delete_auth_group", 00:06:05.331 "iscsi_create_auth_group", 00:06:05.331 "iscsi_set_discovery_auth", 00:06:05.331 "iscsi_get_options", 00:06:05.331 "iscsi_target_node_request_logout", 00:06:05.331 "iscsi_target_node_set_redirect", 00:06:05.331 "iscsi_target_node_set_auth", 00:06:05.331 "iscsi_target_node_add_lun", 00:06:05.331 "iscsi_get_stats", 00:06:05.331 "iscsi_get_connections", 00:06:05.331 "iscsi_portal_group_set_auth", 00:06:05.331 "iscsi_start_portal_group", 00:06:05.331 "iscsi_delete_portal_group", 00:06:05.331 "iscsi_create_portal_group", 00:06:05.331 "iscsi_get_portal_groups", 00:06:05.331 "iscsi_delete_target_node", 00:06:05.331 "iscsi_target_node_remove_pg_ig_maps", 00:06:05.331 "iscsi_target_node_add_pg_ig_maps", 00:06:05.331 "iscsi_create_target_node", 00:06:05.331 "iscsi_get_target_nodes", 00:06:05.331 "iscsi_delete_initiator_group", 00:06:05.331 "iscsi_initiator_group_remove_initiators", 00:06:05.331 "iscsi_initiator_group_add_initiators", 00:06:05.331 "iscsi_create_initiator_group", 00:06:05.331 "iscsi_get_initiator_groups", 00:06:05.331 "nvmf_set_crdt", 00:06:05.331 "nvmf_set_config", 00:06:05.331 "nvmf_set_max_subsystems", 00:06:05.331 "nvmf_stop_mdns_prr", 00:06:05.331 "nvmf_publish_mdns_prr", 00:06:05.331 "nvmf_subsystem_get_listeners", 00:06:05.331 "nvmf_subsystem_get_qpairs", 00:06:05.331 "nvmf_subsystem_get_controllers", 00:06:05.331 "nvmf_get_stats", 00:06:05.331 
"nvmf_get_transports", 00:06:05.331 "nvmf_create_transport", 00:06:05.331 "nvmf_get_targets", 00:06:05.331 "nvmf_delete_target", 00:06:05.331 "nvmf_create_target", 00:06:05.331 "nvmf_subsystem_allow_any_host", 00:06:05.332 "nvmf_subsystem_remove_host", 00:06:05.332 "nvmf_subsystem_add_host", 00:06:05.332 "nvmf_ns_remove_host", 00:06:05.332 "nvmf_ns_add_host", 00:06:05.332 "nvmf_subsystem_remove_ns", 00:06:05.332 "nvmf_subsystem_add_ns", 00:06:05.332 "nvmf_subsystem_listener_set_ana_state", 00:06:05.332 "nvmf_discovery_get_referrals", 00:06:05.332 "nvmf_discovery_remove_referral", 00:06:05.332 "nvmf_discovery_add_referral", 00:06:05.332 "nvmf_subsystem_remove_listener", 00:06:05.332 "nvmf_subsystem_add_listener", 00:06:05.332 "nvmf_delete_subsystem", 00:06:05.332 "nvmf_create_subsystem", 00:06:05.332 "nvmf_get_subsystems", 00:06:05.332 "env_dpdk_get_mem_stats", 00:06:05.332 "nbd_get_disks", 00:06:05.332 "nbd_stop_disk", 00:06:05.332 "nbd_start_disk", 00:06:05.332 "ublk_recover_disk", 00:06:05.332 "ublk_get_disks", 00:06:05.332 "ublk_stop_disk", 00:06:05.332 "ublk_start_disk", 00:06:05.332 "ublk_destroy_target", 00:06:05.332 "ublk_create_target", 00:06:05.332 "virtio_blk_create_transport", 00:06:05.332 "virtio_blk_get_transports", 00:06:05.332 "vhost_controller_set_coalescing", 00:06:05.332 "vhost_get_controllers", 00:06:05.332 "vhost_delete_controller", 00:06:05.332 "vhost_create_blk_controller", 00:06:05.332 "vhost_scsi_controller_remove_target", 00:06:05.332 "vhost_scsi_controller_add_target", 00:06:05.332 "vhost_start_scsi_controller", 00:06:05.332 "vhost_create_scsi_controller", 00:06:05.332 "thread_set_cpumask", 00:06:05.332 "framework_get_scheduler", 00:06:05.332 "framework_set_scheduler", 00:06:05.332 "framework_get_reactors", 00:06:05.332 "thread_get_io_channels", 00:06:05.332 "thread_get_pollers", 00:06:05.332 "thread_get_stats", 00:06:05.332 "framework_monitor_context_switch", 00:06:05.332 "spdk_kill_instance", 00:06:05.332 "log_enable_timestamps", 00:06:05.332 "log_get_flags", 00:06:05.332 "log_clear_flag", 00:06:05.332 "log_set_flag", 00:06:05.332 "log_get_level", 00:06:05.332 "log_set_level", 00:06:05.332 "log_get_print_level", 00:06:05.332 "log_set_print_level", 00:06:05.332 "framework_enable_cpumask_locks", 00:06:05.332 "framework_disable_cpumask_locks", 00:06:05.332 "framework_wait_init", 00:06:05.332 "framework_start_init", 00:06:05.332 "scsi_get_devices", 00:06:05.332 "bdev_get_histogram", 00:06:05.332 "bdev_enable_histogram", 00:06:05.332 "bdev_set_qos_limit", 00:06:05.332 "bdev_set_qd_sampling_period", 00:06:05.332 "bdev_get_bdevs", 00:06:05.332 "bdev_reset_iostat", 00:06:05.332 "bdev_get_iostat", 00:06:05.332 "bdev_examine", 00:06:05.332 "bdev_wait_for_examine", 00:06:05.332 "bdev_set_options", 00:06:05.332 "notify_get_notifications", 00:06:05.332 "notify_get_types", 00:06:05.332 "accel_get_stats", 00:06:05.332 "accel_set_options", 00:06:05.332 "accel_set_driver", 00:06:05.332 "accel_crypto_key_destroy", 00:06:05.332 "accel_crypto_keys_get", 00:06:05.332 "accel_crypto_key_create", 00:06:05.332 "accel_assign_opc", 00:06:05.332 "accel_get_module_info", 00:06:05.332 "accel_get_opc_assignments", 00:06:05.332 "vmd_rescan", 00:06:05.332 "vmd_remove_device", 00:06:05.332 "vmd_enable", 00:06:05.332 "sock_get_default_impl", 00:06:05.332 "sock_set_default_impl", 00:06:05.332 "sock_impl_set_options", 00:06:05.332 "sock_impl_get_options", 00:06:05.332 "iobuf_get_stats", 00:06:05.332 "iobuf_set_options", 00:06:05.332 "keyring_get_keys", 00:06:05.332 "framework_get_pci_devices", 
00:06:05.332 "framework_get_config", 00:06:05.332 "framework_get_subsystems", 00:06:05.332 "vfu_tgt_set_base_path", 00:06:05.332 "trace_get_info", 00:06:05.332 "trace_get_tpoint_group_mask", 00:06:05.332 "trace_disable_tpoint_group", 00:06:05.332 "trace_enable_tpoint_group", 00:06:05.332 "trace_clear_tpoint_mask", 00:06:05.332 "trace_set_tpoint_mask", 00:06:05.332 "spdk_get_version", 00:06:05.332 "rpc_get_methods" 00:06:05.332 ] 00:06:05.332 16:04:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 16:04:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:05.332 16:04:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 192237 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 192237 ']' 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 192237 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 192237 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 192237' 00:06:05.332 killing process with pid 192237 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 192237 00:06:05.332 16:04:48 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 192237 00:06:05.900 00:06:05.900 real 0m1.189s 00:06:05.900 user 0m2.096s 00:06:05.900 sys 0m0.448s 00:06:05.900 16:04:48 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.900 16:04:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.900 ************************************ 00:06:05.900 END TEST spdkcli_tcp 00:06:05.900 ************************************ 00:06:05.900 16:04:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.900 16:04:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.900 16:04:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.900 16:04:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.900 ************************************ 00:06:05.900 START TEST dpdk_mem_utility 00:06:05.900 ************************************ 00:06:05.900 16:04:48 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.900 * Looking for test storage... 
00:06:05.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:05.900 16:04:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:05.900 16:04:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=192443 00:06:05.900 16:04:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.900 16:04:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 192443 00:06:05.900 16:04:48 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 192443 ']' 00:06:05.900 16:04:48 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.900 16:04:48 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.900 16:04:48 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.900 16:04:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.900 16:04:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.900 [2024-07-15 16:04:48.732505] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:05.900 [2024-07-15 16:04:48.732583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192443 ] 00:06:05.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.900 [2024-07-15 16:04:48.789865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.900 [2024-07-15 16:04:48.876495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.158 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.158 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:06.158 16:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:06.158 16:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:06.158 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.158 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.158 { 00:06:06.158 "filename": "/tmp/spdk_mem_dump.txt" 00:06:06.158 } 00:06:06.158 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.158 16:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.416 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:06.416 1 heaps totaling size 814.000000 MiB 00:06:06.416 size: 814.000000 MiB heap id: 0 00:06:06.416 end heaps---------- 00:06:06.416 8 mempools totaling size 598.116089 MiB 00:06:06.416 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:06.416 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:06.417 size: 84.521057 MiB name: bdev_io_192443 00:06:06.417 size: 51.011292 MiB name: evtpool_192443 00:06:06.417 size: 50.003479 MiB name: 
msgpool_192443 00:06:06.417 size: 21.763794 MiB name: PDU_Pool 00:06:06.417 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:06.417 size: 0.026123 MiB name: Session_Pool 00:06:06.417 end mempools------- 00:06:06.417 6 memzones totaling size 4.142822 MiB 00:06:06.417 size: 1.000366 MiB name: RG_ring_0_192443 00:06:06.417 size: 1.000366 MiB name: RG_ring_1_192443 00:06:06.417 size: 1.000366 MiB name: RG_ring_4_192443 00:06:06.417 size: 1.000366 MiB name: RG_ring_5_192443 00:06:06.417 size: 0.125366 MiB name: RG_ring_2_192443 00:06:06.417 size: 0.015991 MiB name: RG_ring_3_192443 00:06:06.417 end memzones------- 00:06:06.417 16:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:06.417 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:06.417 list of free elements. size: 12.519348 MiB 00:06:06.417 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:06.417 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:06.417 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:06.417 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:06.417 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:06.417 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:06.417 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:06.417 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:06.417 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:06.417 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:06.417 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:06.417 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:06.417 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:06.417 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:06.417 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:06.417 list of standard malloc elements. 
size: 199.218079 MiB 00:06:06.417 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:06.417 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:06.417 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:06.417 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:06.417 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:06.417 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:06.417 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:06.417 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:06.417 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:06.417 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:06.417 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:06.417 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:06.417 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:06.417 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:06.417 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:06.417 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:06.417 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:06.417 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:06.417 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:06.417 list of memzone associated elements. 
size: 602.262573 MiB 00:06:06.417 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:06.417 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:06.417 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:06.417 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:06.417 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:06.417 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_192443_0 00:06:06.417 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:06.417 associated memzone info: size: 48.002930 MiB name: MP_evtpool_192443_0 00:06:06.417 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:06.417 associated memzone info: size: 48.002930 MiB name: MP_msgpool_192443_0 00:06:06.417 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:06.417 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:06.417 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:06.417 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:06.417 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:06.417 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_192443 00:06:06.417 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:06.417 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_192443 00:06:06.417 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:06.417 associated memzone info: size: 1.007996 MiB name: MP_evtpool_192443 00:06:06.417 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:06.417 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:06.417 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:06.417 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:06.417 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:06.417 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:06.417 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:06.417 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:06.417 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:06.417 associated memzone info: size: 1.000366 MiB name: RG_ring_0_192443 00:06:06.417 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:06.417 associated memzone info: size: 1.000366 MiB name: RG_ring_1_192443 00:06:06.417 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:06.417 associated memzone info: size: 1.000366 MiB name: RG_ring_4_192443 00:06:06.417 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:06.417 associated memzone info: size: 1.000366 MiB name: RG_ring_5_192443 00:06:06.417 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:06.417 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_192443 00:06:06.417 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:06.417 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:06.417 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:06.417 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:06.417 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:06.417 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:06.417 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:06.417 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_192443 00:06:06.417 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:06.417 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:06.417 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:06.417 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:06.417 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:06.417 associated memzone info: size: 0.015991 MiB name: RG_ring_3_192443 00:06:06.417 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:06.417 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:06.417 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:06.417 associated memzone info: size: 0.000183 MiB name: MP_msgpool_192443 00:06:06.417 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:06.417 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_192443 00:06:06.417 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:06.417 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:06.417 16:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:06.417 16:04:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 192443 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 192443 ']' 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 192443 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 192443 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 192443' 00:06:06.417 killing process with pid 192443 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 192443 00:06:06.417 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 192443 00:06:06.676 00:06:06.676 real 0m1.015s 00:06:06.676 user 0m0.982s 00:06:06.676 sys 0m0.393s 00:06:06.676 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.676 16:04:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.676 ************************************ 00:06:06.676 END TEST dpdk_mem_utility 00:06:06.676 ************************************ 00:06:06.934 16:04:49 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.934 16:04:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.934 16:04:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.934 16:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:06.934 ************************************ 00:06:06.934 START TEST event 00:06:06.934 ************************************ 00:06:06.934 16:04:49 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:06.934 * Looking for test storage... 
00:06:06.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:06.934 16:04:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:06.934 16:04:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.934 16:04:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.934 16:04:49 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:06.934 16:04:49 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.934 16:04:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.934 ************************************ 00:06:06.934 START TEST event_perf 00:06:06.934 ************************************ 00:06:06.934 16:04:49 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.934 Running I/O for 1 seconds...[2024-07-15 16:04:49.789651] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:06.934 [2024-07-15 16:04:49.789718] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192631 ] 00:06:06.934 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.935 [2024-07-15 16:04:49.847330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.193 [2024-07-15 16:04:49.931427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.193 [2024-07-15 16:04:49.931485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.193 [2024-07-15 16:04:49.931589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.193 [2024-07-15 16:04:49.931596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.126 Running I/O for 1 seconds... 00:06:08.126 lcore 0: 236074 00:06:08.126 lcore 1: 236073 00:06:08.126 lcore 2: 236072 00:06:08.126 lcore 3: 236074 00:06:08.126 done. 00:06:08.126 00:06:08.126 real 0m1.234s 00:06:08.126 user 0m4.148s 00:06:08.126 sys 0m0.080s 00:06:08.126 16:04:51 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.126 16:04:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.126 ************************************ 00:06:08.126 END TEST event_perf 00:06:08.126 ************************************ 00:06:08.126 16:04:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.126 16:04:51 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:08.126 16:04:51 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.126 16:04:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.126 ************************************ 00:06:08.126 START TEST event_reactor 00:06:08.126 ************************************ 00:06:08.126 16:04:51 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.126 [2024-07-15 16:04:51.072170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
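A note on the event_perf figures above: the test ran for one second (-t 1) across four reactors (-m 0xF), and each SPDK reactor busy-polls its lcore, which is why a 1-second run reports roughly four seconds of user CPU time and four per-lcore event counters. Illustrative invocations of the same binary (the 0x1 mask run is hypothetical, not taken from this log):

    bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf
    $bin -m 0x1 -t 1   # one reactor: ~1 s of user time, a single lcore counter
    $bin -m 0xF -t 1   # four reactors, as traced above: ~4 s of user time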
00:06:08.126 [2024-07-15 16:04:51.072238] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192792 ] 00:06:08.126 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.385 [2024-07-15 16:04:51.133129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.385 [2024-07-15 16:04:51.224450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.758 test_start 00:06:09.758 oneshot 00:06:09.758 tick 100 00:06:09.758 tick 100 00:06:09.758 tick 250 00:06:09.758 tick 100 00:06:09.758 tick 100 00:06:09.758 tick 250 00:06:09.758 tick 100 00:06:09.758 tick 500 00:06:09.758 tick 100 00:06:09.758 tick 100 00:06:09.758 tick 250 00:06:09.758 tick 100 00:06:09.758 tick 100 00:06:09.758 test_end 00:06:09.758 00:06:09.758 real 0m1.247s 00:06:09.758 user 0m1.160s 00:06:09.758 sys 0m0.082s 00:06:09.758 16:04:52 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.758 16:04:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.758 ************************************ 00:06:09.758 END TEST event_reactor 00:06:09.758 ************************************ 00:06:09.758 16:04:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.758 16:04:52 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:09.758 16:04:52 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.758 16:04:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.758 ************************************ 00:06:09.758 START TEST event_reactor_perf 00:06:09.758 ************************************ 00:06:09.758 16:04:52 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.758 [2024-07-15 16:04:52.366953] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:09.758 [2024-07-15 16:04:52.367018] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193056 ] 00:06:09.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.758 [2024-07-15 16:04:52.425249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.758 [2024-07-15 16:04:52.514872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.691 test_start 00:06:10.691 test_end 00:06:10.691 Performance: 440840 events per second 00:06:10.691 00:06:10.691 real 0m1.234s 00:06:10.691 user 0m1.157s 00:06:10.691 sys 0m0.072s 00:06:10.691 16:04:53 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.691 16:04:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.691 ************************************ 00:06:10.691 END TEST event_reactor_perf 00:06:10.691 ************************************ 00:06:10.691 16:04:53 event -- event/event.sh@49 -- # uname -s 00:06:10.691 16:04:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:10.691 16:04:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.691 16:04:53 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.691 16:04:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.691 16:04:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.691 ************************************ 00:06:10.691 START TEST event_scheduler 00:06:10.691 ************************************ 00:06:10.691 16:04:53 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:10.948 * Looking for test storage... 00:06:10.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:10.948 16:04:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:10.948 16:04:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=193252 00:06:10.948 16:04:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:10.948 16:04:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.948 16:04:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 193252 00:06:10.948 16:04:53 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 193252 ']' 00:06:10.948 16:04:53 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.948 16:04:53 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.948 16:04:53 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
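waitforlisten, traced above, simply retries RPCs against the app's UNIX-domain socket until one succeeds. A minimal sketch of that pattern using the rpc.py and socket path from this log (the retry count and sleep interval are illustrative, not the harness's actual values):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 100); do
        # rpc_get_methods (its output appears earlier in this log) fails until the app listens
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done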
00:06:10.949 16:04:53 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.949 16:04:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.949 [2024-07-15 16:04:53.733895] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:10.949 [2024-07-15 16:04:53.733987] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193252 ] 00:06:10.949 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.949 [2024-07-15 16:04:53.792953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.949 [2024-07-15 16:04:53.880300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.949 [2024-07-15 16:04:53.880355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.949 [2024-07-15 16:04:53.880420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.949 [2024-07-15 16:04:53.880423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.207 16:04:53 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.207 16:04:53 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:11.207 16:04:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:11.207 16:04:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 POWER: Env isn't set yet! 00:06:11.207 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:11.207 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:11.207 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:11.207 POWER: Cannot get available frequencies of lcore 0 00:06:11.207 POWER: Attempting to initialise PSTAT power management... 
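The POWER messages above are EAL probing cpufreq backends in order: the ACPI cpufreq path fails because this host exposes no scaling_available_frequencies file (typical when intel_pstate owns the cores), so EAL falls through to its PSTAT backend. An illustrative way to confirm that from a shell, not part of the test:

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver     # e.g. intel_pstate
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # 'performance' while the test holds it
    ls /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies 2>/dev/null \
        || echo 'no acpi-cpufreq frequency list; PSTAT backend used instead'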
00:06:11.207 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:11.207 POWER: Initialized successfully for lcore 0 power management 00:06:11.207 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:11.207 POWER: Initialized successfully for lcore 1 power management 00:06:11.207 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:11.207 POWER: Initialized successfully for lcore 2 power management 00:06:11.207 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:11.207 POWER: Initialized successfully for lcore 3 power management 00:06:11.207 [2024-07-15 16:04:53.990944] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:11.207 [2024-07-15 16:04:53.990961] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:11.207 [2024-07-15 16:04:53.990971] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:11.207 16:04:53 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.207 16:04:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:11.207 16:04:53 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 [2024-07-15 16:04:54.092814] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:11.207 16:04:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.207 16:04:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:11.207 16:04:54 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.207 16:04:54 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.207 16:04:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 ************************************ 00:06:11.207 START TEST scheduler_create_thread 00:06:11.207 ************************************ 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 2 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 3 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 4 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 5 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 6 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.207 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.465 7 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.465 8 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.465 9 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.465 10 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.465 16:04:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.838 16:04:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.838 16:04:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:12.838 16:04:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:12.838 16:04:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.838 16:04:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.771 16:04:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.771 00:06:13.771 real 0m2.619s 00:06:13.771 user 0m0.013s 00:06:13.771 sys 0m0.003s 00:06:13.771 16:04:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.771 16:04:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.771 ************************************ 00:06:13.771 END TEST scheduler_create_thread 00:06:13.771 ************************************ 00:06:14.029 16:04:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:14.029 16:04:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 193252 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 193252 ']' 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 193252 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
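The thread lifecycle scheduler_create_thread just exercised can be replayed by hand with the same RPC plugin seen in the xtrace, while the scheduler app is still listening on /var/tmp/spdk.sock. A sketch (it assumes scheduler_plugin is importable as in the harness environment, and that scheduler_thread_create prints the new thread id, which the test's thread_id=11 and thread_id=12 captures suggest):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to lcore 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread on the same lcore
    id=$($rpc scheduler_thread_create -n half_active -a 0)        # unpinned, initially idle
    $rpc scheduler_thread_set_active "$id" 50                     # raise it to 50% active
    $rpc scheduler_thread_delete "$id"                            # and tear it down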
00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 193252 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 193252' 00:06:14.029 killing process with pid 193252 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 193252 00:06:14.029 16:04:56 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 193252 00:06:14.287 [2024-07-15 16:04:57.223831] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:14.545 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:14.545 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:14.545 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:14.545 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:14.545 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:14.545 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:14.545 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:14.545 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:14.545 00:06:14.545 real 0m3.824s 00:06:14.545 user 0m5.851s 00:06:14.545 sys 0m0.326s 00:06:14.545 16:04:57 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.545 16:04:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.545 ************************************ 00:06:14.545 END TEST event_scheduler 00:06:14.545 ************************************ 00:06:14.545 16:04:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.545 16:04:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.545 16:04:57 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.545 16:04:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.545 16:04:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.545 ************************************ 00:06:14.545 START TEST app_repeat 00:06:14.545 ************************************ 00:06:14.545 16:04:57 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:14.545 16:04:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.545 16:04:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.545 16:04:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:14.545 16:04:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.545 16:04:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.545 16:04:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.545 16:04:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:14.804 16:04:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=193699 00:06:14.804 16:04:57 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:14.804 16:04:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.804 16:04:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 193699' 00:06:14.804 Process app_repeat pid: 193699 00:06:14.804 16:04:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.804 16:04:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:14.804 spdk_app_start Round 0 00:06:14.804 16:04:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 193699 /var/tmp/spdk-nbd.sock 00:06:14.804 16:04:57 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 193699 ']' 00:06:14.804 16:04:57 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.804 16:04:57 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.804 16:04:57 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.804 16:04:57 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.804 16:04:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.804 [2024-07-15 16:04:57.545334] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:14.804 [2024-07-15 16:04:57.545403] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193699 ] 00:06:14.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.804 [2024-07-15 16:04:57.608401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.804 [2024-07-15 16:04:57.699629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.804 [2024-07-15 16:04:57.699635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.061 16:04:57 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.061 16:04:57 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:15.061 16:04:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.320 Malloc0 00:06:15.320 16:04:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.578 Malloc1 00:06:15.578 16:04:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.578 16:04:58 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.578 16:04:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.836 /dev/nbd0 00:06:15.836 16:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.836 16:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.836 1+0 records in 00:06:15.836 1+0 records out 00:06:15.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186282 s, 22.0 MB/s 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:15.836 16:04:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:15.836 16:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.836 16:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.836 16:04:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.094 /dev/nbd1 00:06:16.094 16:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.094 16:04:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:16.094 16:04:58 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.094 1+0 records in 00:06:16.094 1+0 records out 00:06:16.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183221 s, 22.4 MB/s 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:16.094 16:04:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:16.094 16:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.094 16:04:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.094 16:04:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.094 16:04:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.094 16:04:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.355 { 00:06:16.355 "nbd_device": "/dev/nbd0", 00:06:16.355 "bdev_name": "Malloc0" 00:06:16.355 }, 00:06:16.355 { 00:06:16.355 "nbd_device": "/dev/nbd1", 00:06:16.355 "bdev_name": "Malloc1" 00:06:16.355 } 00:06:16.355 ]' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.355 { 00:06:16.355 "nbd_device": "/dev/nbd0", 00:06:16.355 "bdev_name": "Malloc0" 00:06:16.355 }, 00:06:16.355 { 00:06:16.355 "nbd_device": "/dev/nbd1", 00:06:16.355 "bdev_name": "Malloc1" 00:06:16.355 } 00:06:16.355 ]' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.355 /dev/nbd1' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.355 /dev/nbd1' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.355 16:04:59 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.355 256+0 records in 00:06:16.355 256+0 records out 00:06:16.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049765 s, 211 MB/s 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.355 256+0 records in 00:06:16.355 256+0 records out 00:06:16.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235017 s, 44.6 MB/s 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.355 256+0 records in 00:06:16.355 256+0 records out 00:06:16.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288166 s, 36.4 MB/s 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.355 16:04:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.614 16:04:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.873 16:04:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.131 16:05:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.131 16:05:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.131 16:05:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.390 16:05:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.390 16:05:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.649 16:05:00 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:17.649 [2024-07-15 16:05:00.622889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.908 [2024-07-15 16:05:00.714450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.908 [2024-07-15 16:05:00.714452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.908 [2024-07-15 16:05:00.777067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.908 [2024-07-15 16:05:00.777176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.456 16:05:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.456 16:05:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:20.456 spdk_app_start Round 1 00:06:20.456 16:05:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 193699 /var/tmp/spdk-nbd.sock 00:06:20.456 16:05:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 193699 ']' 00:06:20.456 16:05:03 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.456 16:05:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.456 16:05:03 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.456 16:05:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.456 16:05:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.714 16:05:03 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.714 16:05:03 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:20.714 16:05:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.971 Malloc0 00:06:20.971 16:05:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.229 Malloc1 00:06:21.229 16:05:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
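Each app_repeat round rebuilds its backing devices before the nbd checks: the two bdev_malloc_create calls above each allocate a 64 MiB RAM-backed bdev with a 4096-byte block size and print the assigned name. Equivalent standalone invocations against the round's socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
    "$rpc" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1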
00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.229 16:05:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.486 /dev/nbd0 00:06:21.486 16:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.486 16:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.487 1+0 records in 00:06:21.487 1+0 records out 00:06:21.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202131 s, 20.3 MB/s 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:21.487 16:05:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:21.487 16:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.487 16:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.487 16:05:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.746 /dev/nbd1 00:06:21.746 16:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.746 16:05:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
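The per-device readiness check traced above (autotest_common.sh@864-885) reconstructs to roughly the function below. The retry delay is an assumption, since the first grep of /proc/partitions succeeded here and no sleep was traced; the temp-file path is shortened from the workspace path:

    waitfornbd() {
        # Wait for the kernel to register the device, then prove it answers reads.
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # delay assumed; not visible in this trace
        done
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # a non-empty O_DIRECT read means the device is live
    }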
00:06:21.746 16:05:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.005 1+0 records in 00:06:22.005 1+0 records out 00:06:22.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182465 s, 22.4 MB/s 00:06:22.006 16:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.006 16:05:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:22.006 16:05:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.006 16:05:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:22.006 16:05:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.006 { 00:06:22.006 "nbd_device": "/dev/nbd0", 00:06:22.006 "bdev_name": "Malloc0" 00:06:22.006 }, 00:06:22.006 { 00:06:22.006 "nbd_device": "/dev/nbd1", 00:06:22.006 "bdev_name": "Malloc1" 00:06:22.006 } 00:06:22.006 ]' 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.006 16:05:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.006 { 00:06:22.006 "nbd_device": "/dev/nbd0", 00:06:22.006 "bdev_name": "Malloc0" 00:06:22.006 }, 00:06:22.006 { 00:06:22.006 "nbd_device": "/dev/nbd1", 00:06:22.006 "bdev_name": "Malloc1" 00:06:22.006 } 00:06:22.006 ]' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.264 /dev/nbd1' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.264 /dev/nbd1' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.264 256+0 records in 00:06:22.264 256+0 records out 00:06:22.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408965 s, 256 MB/s 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.264 256+0 records in 00:06:22.264 256+0 records out 00:06:22.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272453 s, 38.5 MB/s 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.264 256+0 records in 00:06:22.264 256+0 records out 00:06:22.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255261 s, 41.1 MB/s 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.264 16:05:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.522 
16:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.522 16:05:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.779 16:05:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.037 16:05:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.037 16:05:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.295 16:05:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.555 [2024-07-15 16:05:06.421604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.555 [2024-07-15 16:05:06.512230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.555 [2024-07-15 16:05:06.512236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.814 [2024-07-15 16:05:06.576044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
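The count-to-zero assertion that closes each round (nbd_common.sh@61-66 in the trace) is a jq/grep pipeline over the nbd_get_disks JSON. The bare `true` visible at @65 exists because `grep -c` exits non-zero when it counts zero matches; a condensed sketch, with $RPC as in the earlier setup sketch:

    # Expect no NBD devices left after teardown. grep -c still prints "0"
    # on no matches but exits 1, hence the || true.
    count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || { echo "leaked $count NBD device(s)" >&2; exit 1; }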
00:06:23.814 [2024-07-15 16:05:06.576126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.417 16:05:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.417 16:05:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:26.417 spdk_app_start Round 2 00:06:26.417 16:05:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 193699 /var/tmp/spdk-nbd.sock 00:06:26.417 16:05:09 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 193699 ']' 00:06:26.417 16:05:09 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.417 16:05:09 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.417 16:05:09 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.417 16:05:09 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.417 16:05:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.674 16:05:09 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.674 16:05:09 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:26.674 16:05:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.931 Malloc0 00:06:26.931 16:05:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.189 Malloc1 00:06:27.189 16:05:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.189 16:05:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.447 /dev/nbd0 00:06:27.447 16:05:10 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.447 16:05:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.447 1+0 records in 00:06:27.447 1+0 records out 00:06:27.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001505 s, 27.2 MB/s 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:27.447 16:05:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:27.447 16:05:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.447 16:05:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.447 16:05:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.705 /dev/nbd1 00:06:27.705 16:05:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.705 16:05:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:27.705 16:05:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:27.706 16:05:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.706 1+0 records in 00:06:27.706 1+0 records out 00:06:27.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202793 s, 20.2 MB/s 00:06:27.706 16:05:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.706 16:05:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:27.706 16:05:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.706 16:05:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:27.706 16:05:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:27.706 16:05:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.706 16:05:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.706 16:05:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.706 16:05:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.706 16:05:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.964 { 00:06:27.964 "nbd_device": "/dev/nbd0", 00:06:27.964 "bdev_name": "Malloc0" 00:06:27.964 }, 00:06:27.964 { 00:06:27.964 "nbd_device": "/dev/nbd1", 00:06:27.964 "bdev_name": "Malloc1" 00:06:27.964 } 00:06:27.964 ]' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.964 { 00:06:27.964 "nbd_device": "/dev/nbd0", 00:06:27.964 "bdev_name": "Malloc0" 00:06:27.964 }, 00:06:27.964 { 00:06:27.964 "nbd_device": "/dev/nbd1", 00:06:27.964 "bdev_name": "Malloc1" 00:06:27.964 } 00:06:27.964 ]' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.964 /dev/nbd1' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.964 /dev/nbd1' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.964 256+0 records in 00:06:27.964 256+0 records out 00:06:27.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474372 s, 221 MB/s 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.964 256+0 records in 00:06:27.964 256+0 records out 00:06:27.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209177 s, 50.1 MB/s 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.964 256+0 records in 00:06:27.964 256+0 records out 00:06:27.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278405 s, 37.7 MB/s 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.964 16:05:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
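The data pass just traced is a plain write-then-compare over the raw devices. Condensed below, with the temp path shortened from the workspace path; the sizes match the dd lines above:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct # O_DIRECT write through each NBD
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                            # byte-compare the first 1 MiB back
    done
    rm $tmp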
00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.221 16:05:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.479 16:05:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.736 16:05:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.736 16:05:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.737 16:05:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.737 16:05:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.995 16:05:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.253 [2024-07-15 16:05:12.194322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.511 [2024-07-15 16:05:12.285289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.511 [2024-07-15 16:05:12.285294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.511 [2024-07-15 16:05:12.348152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.511 [2024-07-15 16:05:12.348235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
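The teardown counterpart, waitfornbd_exit (nbd_common.sh@35-45, traced twice per round), polls until the kernel drops the device from the partition table. As with waitfornbd, the inter-retry sleep is assumed, since both devices disappeared on the first probe here:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # break as soon as the device is gone from /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed; not visible in this trace
        done
        return 0
    }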
00:06:32.045 16:05:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 193699 /var/tmp/spdk-nbd.sock 00:06:32.045 16:05:14 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 193699 ']' 00:06:32.045 16:05:14 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.045 16:05:14 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.045 16:05:14 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.045 16:05:14 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.045 16:05:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.303 16:05:15 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.303 16:05:15 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:32.303 16:05:15 event.app_repeat -- event/event.sh@39 -- # killprocess 193699 00:06:32.303 16:05:15 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 193699 ']' 00:06:32.303 16:05:15 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 193699 00:06:32.303 16:05:15 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:32.303 16:05:15 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.303 16:05:15 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 193699 00:06:32.304 16:05:15 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.304 16:05:15 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.304 16:05:15 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 193699' 00:06:32.304 killing process with pid 193699 00:06:32.304 16:05:15 event.app_repeat -- common/autotest_common.sh@965 -- # kill 193699 00:06:32.304 16:05:15 event.app_repeat -- common/autotest_common.sh@970 -- # wait 193699 00:06:32.562 spdk_app_start is called in Round 0. 00:06:32.562 Shutdown signal received, stop current app iteration 00:06:32.562 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:32.562 spdk_app_start is called in Round 1. 00:06:32.562 Shutdown signal received, stop current app iteration 00:06:32.562 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:32.562 spdk_app_start is called in Round 2. 00:06:32.562 Shutdown signal received, stop current app iteration 00:06:32.562 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:32.562 spdk_app_start is called in Round 3. 
00:06:32.562 Shutdown signal received, stop current app iteration 00:06:32.562 16:05:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:32.562 16:05:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:32.562 00:06:32.562 real 0m17.927s 00:06:32.562 user 0m38.985s 00:06:32.562 sys 0m3.211s 00:06:32.562 16:05:15 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.562 16:05:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.562 ************************************ 00:06:32.562 END TEST app_repeat 00:06:32.562 ************************************ 00:06:32.562 16:05:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:32.562 16:05:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.562 16:05:15 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.562 16:05:15 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.562 16:05:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.562 ************************************ 00:06:32.562 START TEST cpu_locks 00:06:32.562 ************************************ 00:06:32.562 16:05:15 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.820 * Looking for test storage... 00:06:32.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:32.820 16:05:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:32.820 16:05:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:32.820 16:05:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:32.820 16:05:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:32.820 16:05:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.820 16:05:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.820 16:05:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.820 ************************************ 00:06:32.820 START TEST default_locks 00:06:32.820 ************************************ 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=196054 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 196054 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 196054 ']' 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
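killprocess, traced above for the app_repeat pid 193699 (autotest_common.sh@946-970), checks that the target is alive and is not a sudo wrapper before killing and reaping it. A simplified reconstruction; the real helper's sudo branch is not exercised in this trace, so its handling below is an assumption:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                      # fails if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in the trace
        fi
        [ "$process_name" = sudo ] && return 1              # assumed: never reap a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap so sockets and locks are released
    }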
00:06:32.820 16:05:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.820 16:05:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.820 [2024-07-15 16:05:15.624036] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:32.820 [2024-07-15 16:05:15.624127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196054 ] 00:06:32.820 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.820 [2024-07-15 16:05:15.686940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.820 [2024-07-15 16:05:15.778318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.078 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.078 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:33.078 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 196054 00:06:33.078 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 196054 00:06:33.078 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.643 lslocks: write error 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 196054 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 196054 ']' 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 196054 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196054 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196054' 00:06:33.643 killing process with pid 196054 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 196054 00:06:33.643 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 196054 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 196054 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 196054 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # 
waitforlisten 196054 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 196054 ']' 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (196054) - No such process 00:06:34.212 ERROR: process (pid: 196054) is no longer running 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.212 00:06:34.212 real 0m1.324s 00:06:34.212 user 0m1.257s 00:06:34.212 sys 0m0.556s 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.212 16:05:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 ************************************ 00:06:34.212 END TEST default_locks 00:06:34.212 ************************************ 00:06:34.212 16:05:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:34.212 16:05:16 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.212 16:05:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.212 16:05:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 ************************************ 00:06:34.212 START TEST default_locks_via_rpc 00:06:34.212 ************************************ 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=196224 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 196224 00:06:34.212 16:05:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 196224 ']' 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.212 16:05:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 [2024-07-15 16:05:17.003415] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:34.212 [2024-07-15 16:05:17.003508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196224 ] 00:06:34.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.212 [2024-07-15 16:05:17.068951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.212 [2024-07-15 16:05:17.157239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 196224 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 196224 00:06:34.472 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 196224 00:06:35.039 16:05:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 196224 ']' 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 196224 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196224 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196224' 00:06:35.039 killing process with pid 196224 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 196224 00:06:35.039 16:05:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 196224 00:06:35.297 00:06:35.297 real 0m1.236s 00:06:35.297 user 0m1.166s 00:06:35.297 sys 0m0.553s 00:06:35.297 16:05:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.297 16:05:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.297 ************************************ 00:06:35.297 END TEST default_locks_via_rpc 00:06:35.297 ************************************ 00:06:35.297 16:05:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:35.297 16:05:18 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.297 16:05:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.297 16:05:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.297 ************************************ 00:06:35.297 START TEST non_locking_app_on_locked_coremask 00:06:35.297 ************************************ 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=196503 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 196503 /var/tmp/spdk.sock 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 196503 ']' 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
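Where default_locks verified the lock files that spdk_tgt takes at startup, default_locks_via_rpc (just traced) toggles them on a live target over the RPC socket. The sequence reduces to the sketch below, with rpc_cmd standing in for scripts/rpc.py against /var/tmp/spdk.sock:

    rpc_cmd framework_disable_cpumask_locks   # live target drops its per-core lock files
    # (the traced no_locks helper then globs the lock files and expects none;
    #  approximated here by the absence check being left to that helper)
    rpc_cmd framework_enable_cpumask_locks    # retake the locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # and they must be back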
00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.298 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.557 [2024-07-15 16:05:18.282674] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:35.557 [2024-07-15 16:05:18.282771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196503 ] 00:06:35.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.557 [2024-07-15 16:05:18.340712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.557 [2024-07-15 16:05:18.430806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=196507 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 196507 /var/tmp/spdk2.sock 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 196507 ']' 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.816 16:05:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.816 [2024-07-15 16:05:18.738277] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:35.816 [2024-07-15 16:05:18.738372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196507 ] 00:06:35.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.076 [2024-07-15 16:05:18.836794] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
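The setup just traced for non_locking_app_on_locked_coremask: one target holds the core-0 lock, and a second target on the same mask still starts because it opts out of lock-taking ("CPU core locks deactivated." above) and listens on its own RPC socket. The launch pattern, with spdk_tgt standing in for the full build/bin path in the trace:

    spdk_tgt -m 0x1 &                                              # takes the cpumask lock for core 0
    pid1=$!
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                        # same core, no lock, separate socket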
00:06:36.076 [2024-07-15 16:05:18.836835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.076 [2024-07-15 16:05:19.021802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.008 16:05:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.008 16:05:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:37.008 16:05:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 196503 00:06:37.008 16:05:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 196503 00:06:37.008 16:05:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.267 lslocks: write error 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 196503 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 196503 ']' 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 196503 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196503 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196503' killing process with pid 196503 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 196503 00:06:37.267 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 196503 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 196507 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 196507 ']' 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 196507 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196507 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196507' killing process with pid 196507
00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 196507 00:06:38.202 16:05:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 196507 00:06:38.460 00:06:38.460 real 0m3.124s 00:06:38.460 user 0m3.242s 00:06:38.460 sys 0m1.055s 00:06:38.460 16:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.460 16:05:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.460 ************************************ 00:06:38.460 END TEST non_locking_app_on_locked_coremask 00:06:38.460 ************************************ 00:06:38.460 16:05:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:38.460 16:05:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.460 16:05:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.460 16:05:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.460 ************************************ 00:06:38.460 START TEST locking_app_on_unlocked_coremask ************************************ 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=196818 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 196818 /var/tmp/spdk.sock 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 196818 ']' 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.460 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.719 [2024-07-15 16:05:21.458954] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:38.719 [2024-07-15 16:05:21.459041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196818 ] 00:06:38.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.719 [2024-07-15 16:05:21.523539] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
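The locks_exist check exercised above (event/cpu_locks.sh@22 in the xtrace) is a one-liner: a target that claimed its cores holds POSIX locks on the /var/tmp/spdk_cpu_lock_* files, and lslocks lists the locks a given pid holds. The 'lslocks: write error' message is harmless; grep -q exits on the first match and lslocks complains about the closed pipe. A sketch of the check, reconstructed from the trace:

    locks_exist() {
        # succeeds only if pid $1 still holds a lock on a spdk_cpu_lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 196503 && echo 'pid 196503 still owns its core lock'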
00:06:38.719 [2024-07-15 16:05:21.523577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.720 [2024-07-15 16:05:21.616472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=196947 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 196947 /var/tmp/spdk2.sock 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 196947 ']' 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.977 16:05:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.977 [2024-07-15 16:05:21.925251] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:38.977 [2024-07-15 16:05:21.925347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196947 ] 00:06:38.977 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.235 [2024-07-15 16:05:22.023689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.235 [2024-07-15 16:05:22.208981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.168 16:05:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.168 16:05:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:40.168 16:05:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 196947 00:06:40.168 16:05:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 196947 00:06:40.168 16:05:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.428 lslocks: write error 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 196818 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 196818 ']' 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 196818 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196818 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196818' killing process with pid 196818 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 196818 00:06:40.428 16:05:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 196818 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 196947 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 196947 ']' 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 196947 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196947 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.366 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:41.367 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196947' killing process with pid 196947 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 196947 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 196947 00:06:41.625 00:06:41.625 real 0m3.047s 00:06:41.625 user 0m3.200s 00:06:41.625 sys 0m1.012s 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.625 ************************************ 00:06:41.625 END TEST locking_app_on_unlocked_coremask 00:06:41.625 ************************************ 00:06:41.625 16:05:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:41.625 16:05:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.625 16:05:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.625 16:05:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.625 ************************************ 00:06:41.625 START TEST locking_app_on_locked_coremask ************************************ 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=197247 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 197247 /var/tmp/spdk.sock 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 197247 ']' 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.625 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.883 [2024-07-15 16:05:24.560796] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
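Every teardown in this suite goes through the same killprocess helper seen in the xtrace: validate the pid argument, probe it with kill -0, refuse to signal a sudo wrapper, then kill and wait. A condensed sketch of that logic (not the full helper, which also retries and handles more platforms), matching the autotest_common.sh steps above:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1              # the '[' -z ... ']' guard
        kill -0 "$pid" || return 1             # process must still be alive
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1     # never kill a sudo parent
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it and collect its status
    }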
00:06:41.626 [2024-07-15 16:05:24.560889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197247 ] 00:06:41.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.884 [2024-07-15 16:05:24.625080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.884 [2024-07-15 16:05:24.718196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=197380 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 197380 /var/tmp/spdk2.sock 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 197380 /var/tmp/spdk2.sock 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 197380 /var/tmp/spdk2.sock 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 197380 ']' 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.142 16:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.142 [2024-07-15 16:05:25.026809] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:42.142 [2024-07-15 16:05:25.026882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197380 ] 00:06:42.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.142 [2024-07-15 16:05:25.118838] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 197247 has claimed it. 00:06:42.142 [2024-07-15 16:05:25.118894] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (197380) - No such process 00:06:43.079 ERROR: process (pid: 197380) is no longer running 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 197247 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 197247 00:06:43.079 16:05:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.079 lslocks: write error 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 197247 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 197247 ']' 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 197247 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 197247 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.079 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 197247' killing process with pid 197247 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 197247 00:06:43.650 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 197247 00:06:43.650 00:06:43.650 real 0m1.928s 00:06:43.650 user 0m2.077s 00:06:43.650 sys 0m0.634s 00:06:43.650 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:43.650 16:05:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.650 ************************************ 00:06:43.650 END TEST locking_app_on_locked_coremask 00:06:43.650 ************************************ 00:06:43.650 16:05:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:43.650 16:05:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.650 16:05:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.650 16:05:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.650 ************************************ 00:06:43.650 START TEST locking_overlapped_coremask ************************************ 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=197545 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 197545 /var/tmp/spdk.sock 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 197545 ']' 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.650 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.650 [2024-07-15 16:05:26.536046] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
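The locking_app_on_locked_coremask case that just finished inverts the expectation: the second spdk_tgt keeps lock acquisition enabled on an already-claimed core, app.c rejects it ('Cannot create lock on core 0, probably process 197247 has claimed it'), and the NOT wrapper turns that expected failure into a pass (the es=1 bookkeeping above). Stripped of the harness details, the pattern reduces to something like:

    # must fail: core 0 is already locked by the first target
    if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo 'unexpected: second target started on a locked core' >&2
        exit 1
    fi
    # falling through means the core lock held, which is the passing outcome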
00:06:43.650 [2024-07-15 16:05:26.536131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197545 ] 00:06:43.650 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.650 [2024-07-15 16:05:26.593336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.909 [2024-07-15 16:05:26.682170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.909 [2024-07-15 16:05:26.682228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.909 [2024-07-15 16:05:26.682231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.167 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=197560 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 197560 /var/tmp/spdk2.sock 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 197560 /var/tmp/spdk2.sock 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 197560 /var/tmp/spdk2.sock 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 197560 ']' 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.168 16:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.168 [2024-07-15 16:05:26.983095] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
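The two cpumasks in play make the upcoming collision predictable: -m 0x7 is binary 111 (cores 0-2) and -m 0x1c is binary 11100 (cores 2-4), so the only shared bit is core 2, exactly the core named in the claim error that follows. The overlap can be checked with shell arithmetic:

    m1=0x7; m2=0x1c
    printf 'overlap mask: 0x%x\n' $(( m1 & m2 ))   # 0x4, i.e. core 2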
00:06:44.168 [2024-07-15 16:05:26.983189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197560 ] 00:06:44.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.168 [2024-07-15 16:05:27.073078] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 197545 has claimed it. 00:06:44.168 [2024-07-15 16:05:27.073133] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (197560) - No such process 00:06:44.784 ERROR: process (pid: 197560) is no longer running 00:06:44.784 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.784 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 197545 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 197545 ']' 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 197545 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 197545 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 197545' 00:06:44.785 killing process with pid 197545 00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 197545 
00:06:44.785 16:05:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 197545 00:06:45.352 00:06:45.352 real 0m1.622s 00:06:45.352 user 0m4.439s 00:06:45.352 sys 0m0.445s 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.352 ************************************ 00:06:45.352 END TEST locking_overlapped_coremask 00:06:45.352 ************************************ 00:06:45.352 16:05:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:45.352 16:05:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.352 16:05:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.352 16:05:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.352 ************************************ 00:06:45.352 START TEST locking_overlapped_coremask_via_rpc 00:06:45.352 ************************************ 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=197725 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 197725 /var/tmp/spdk.sock 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 197725 ']' 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.352 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.352 [2024-07-15 16:05:28.203690] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:45.352 [2024-07-15 16:05:28.203805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197725 ] 00:06:45.352 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.352 [2024-07-15 16:05:28.271390] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
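check_remaining_locks (event/cpu_locks.sh@36-38 in the trace above) is the assertion that the surviving -m 0x7 target still owns exactly the lock files for cores 0-2 and that the failed overlapping instance left nothing behind. The glob comparison from the trace, written out as a standalone helper:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2
        [[ ${locks[*]} == "${locks_expected[*]}" ]]               # exact match
    }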
00:06:45.352 [2024-07-15 16:05:28.271432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.612 [2024-07-15 16:05:28.369113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.612 [2024-07-15 16:05:28.369194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.612 [2024-07-15 16:05:28.369176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=197850 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 197850 /var/tmp/spdk2.sock 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 197850 ']' 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.872 16:05:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.872 [2024-07-15 16:05:28.659065] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:45.872 [2024-07-15 16:05:28.659156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197850 ] 00:06:45.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.872 [2024-07-15 16:05:28.747298] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.872 [2024-07-15 16:05:28.747331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.132 [2024-07-15 16:05:28.923458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.132 [2024-07-15 16:05:28.926797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.132 [2024-07-15 16:05:28.926799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.698 [2024-07-15 16:05:29.619840] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 197725 has claimed it. 
00:06:46.698 request: 00:06:46.698 { 00:06:46.698 "method": "framework_enable_cpumask_locks", 00:06:46.698 "req_id": 1 00:06:46.698 } 00:06:46.698 Got JSON-RPC error response 00:06:46.698 response: 00:06:46.698 { 00:06:46.698 "code": -32603, 00:06:46.698 "message": "Failed to claim CPU core: 2" 00:06:46.698 } 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 197725 /var/tmp/spdk.sock 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 197725 ']' 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.698 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 197850 /var/tmp/spdk2.sock 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 197850 ']' 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
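locking_overlapped_coremask_via_rpc differs from the previous case only in when the locks are taken: both targets start with --disable-cpumask-locks, and the locks are then requested over JSON-RPC, so it is the framework_enable_cpumask_locks call on the second target that fails with the -32603 response shown above. Driven by hand it would look roughly like this (rpc_cmd in the trace forwards to rpc.py):

    # first target (mask 0x7) claims its cores successfully
    scripts/rpc.py framework_enable_cpumask_locks
    # second target (mask 0x1c) overlaps on core 2; this call returns
    # {"code": -32603, "message": "Failed to claim CPU core: 2"}
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks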
00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.957 16:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.216 00:06:47.216 real 0m1.966s 00:06:47.216 user 0m1.027s 00:06:47.216 sys 0m0.188s 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.216 16:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.216 ************************************ 00:06:47.216 END TEST locking_overlapped_coremask_via_rpc 00:06:47.216 ************************************ 00:06:47.216 16:05:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.216 16:05:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 197725 ]] 00:06:47.216 16:05:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 197725 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 197725 ']' 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 197725 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 197725 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 197725' 00:06:47.216 killing process with pid 197725 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 197725 00:06:47.216 16:05:30 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 197725 00:06:47.784 16:05:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 197850 ]] 00:06:47.784 16:05:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 197850 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 197850 ']' 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 197850 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 197850 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 197850' killing process with pid 197850 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 197850 00:06:47.784 16:05:30 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 197850 00:06:48.043 16:05:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.043 16:05:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:48.043 16:05:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 197725 ]] 00:06:48.043 16:05:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 197725 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 197725 ']' 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 197725 00:06:48.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (197725) - No such process 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 197725 is not found' 00:06:48.043 Process with pid 197725 is not found 00:06:48.043 16:05:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 197850 ]] 00:06:48.043 16:05:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 197850 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 197850 ']' 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 197850 00:06:48.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (197850) - No such process 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 197850 is not found' 00:06:48.043 Process with pid 197850 is not found 00:06:48.043 16:05:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.043 00:06:48.043 real 0m15.490s 00:06:48.043 user 0m27.038s 00:06:48.043 sys 0m5.345s 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.043 16:05:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.043 ************************************ 00:06:48.043 END TEST cpu_locks 00:06:48.043 ************************************ 00:06:48.043 00:06:48.043 real 0m41.316s 00:06:48.043 user 1m18.489s 00:06:48.043 sys 0m9.353s 00:06:48.043 16:05:31 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.043 16:05:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.043 ************************************ 00:06:48.043 END TEST event 00:06:48.043 ************************************ 00:06:48.303 16:05:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:48.303 16:05:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.303 16:05:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.303 16:05:31 -- common/autotest_common.sh@10 -- # set +x 00:06:48.303 ************************************ 00:06:48.303 START TEST thread ************************************ 00:06:48.303 16:05:31 thread -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:06:48.303 * Looking for test storage... * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:48.303 16:05:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.303 16:05:31 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:48.303 16:05:31 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.303 16:05:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.303 ************************************ 00:06:48.303 START TEST thread_poller_perf ************************************ 00:06:48.303 16:05:31 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.303 [2024-07-15 16:05:31.142158] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:48.303 [2024-07-15 16:05:31.142215] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198219 ] 00:06:48.303 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.303 [2024-07-15 16:05:31.200114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.561 [2024-07-15 16:05:31.289864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.561 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:49.499 ====================================== 00:06:49.499 busy:2715421150 (cyc) 00:06:49.499 total_run_count: 296000 00:06:49.499 tsc_hz: 2700000000 (cyc) 00:06:49.499 ====================================== 00:06:49.499 poller_cost: 9173 (cyc), 3397 (nsec) 00:06:49.499 00:06:49.499 real 0m1.251s 00:06:49.499 user 0m1.165s 00:06:49.499 sys 0m0.081s 00:06:49.499 16:05:32 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.499 16:05:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.499 ************************************ 00:06:49.499 END TEST thread_poller_perf 00:06:49.499 ************************************ 00:06:49.499 16:05:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.499 16:05:32 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:49.499 16:05:32 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.499 16:05:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.499 ************************************ 00:06:49.499 START TEST thread_poller_perf ************************************ 00:06:49.499 16:05:32 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.499 [2024-07-15 16:05:32.449122] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
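The first poller_perf summary above is internally consistent and worth decoding: poller_cost is simply busy cycles divided by total runs, and the nanosecond figure follows from the 2.7 GHz TSC that the tool reports. The same arithmetic in shell:

    busy=2715421150; runs=296000; tsc_hz=2700000000
    echo $(( busy / runs ))                          # 9173 cycles per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3397 nsec per poll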
00:06:49.499 [2024-07-15 16:05:32.449183] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198373 ] 00:06:49.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.757 [2024-07-15 16:05:32.514361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.757 [2024-07-15 16:05:32.605772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.757 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:51.132 ====================================== 00:06:51.132 busy:2702880537 (cyc) 00:06:51.132 total_run_count: 3853000 00:06:51.132 tsc_hz: 2700000000 (cyc) 00:06:51.132 ====================================== 00:06:51.132 poller_cost: 701 (cyc), 259 (nsec) 00:06:51.132 00:06:51.132 real 0m1.256s 00:06:51.132 user 0m1.159s 00:06:51.132 sys 0m0.091s 00:06:51.132 16:05:33 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.132 16:05:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.132 ************************************ 00:06:51.132 END TEST thread_poller_perf 00:06:51.132 ************************************ 00:06:51.132 16:05:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:51.132 00:06:51.132 real 0m2.663s 00:06:51.132 user 0m2.388s 00:06:51.132 sys 0m0.275s 00:06:51.132 16:05:33 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.132 16:05:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.132 ************************************ 00:06:51.132 END TEST thread 00:06:51.132 ************************************ 00:06:51.132 16:05:33 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:51.132 16:05:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.132 16:05:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.132 16:05:33 -- common/autotest_common.sh@10 -- # set +x 00:06:51.132 ************************************ 00:06:51.132 START TEST accel 00:06:51.132 ************************************ 00:06:51.132 16:05:33 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:51.132 * Looking for test storage... 
00:06:51.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:51.132 16:05:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:51.132 16:05:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:51.132 16:05:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.132 16:05:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=198566 00:06:51.132 16:05:33 accel -- accel/accel.sh@63 -- # waitforlisten 198566 00:06:51.132 16:05:33 accel -- common/autotest_common.sh@827 -- # '[' -z 198566 ']' 00:06:51.132 16:05:33 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.132 16:05:33 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:51.132 16:05:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:51.132 16:05:33 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.132 16:05:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.133 16:05:33 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.133 16:05:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.133 16:05:33 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.133 16:05:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.133 16:05:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.133 16:05:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.133 16:05:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.133 16:05:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:51.133 16:05:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:51.133 [2024-07-15 16:05:33.867464] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:51.133 [2024-07-15 16:05:33.867544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198566 ] 00:06:51.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.133 [2024-07-15 16:05:33.930235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.133 [2024-07-15 16:05:34.018816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@860 -- # return 0 00:06:51.392 16:05:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:51.392 16:05:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:51.392 16:05:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:51.392 16:05:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:51.392 16:05:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:51.392 16:05:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.392 16:05:34 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:06:51.392 16:05:34 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # IFS== 00:06:51.392 16:05:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:51.392 16:05:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:51.392 16:05:34 accel -- accel/accel.sh@75 -- # killprocess 198566 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@946 -- # '[' -z 198566 ']' 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@950 -- # kill -0 198566 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@951 -- # uname 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 198566 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 198566' killing process with pid 198566 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@965 -- # kill 198566 00:06:51.392 16:05:34 accel -- common/autotest_common.sh@970 -- # wait 198566 00:06:51.960 16:05:34 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:51.960 16:05:34 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:51.960 16:05:34 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:51.960 16:05:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.960 16:05:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.960 16:05:34 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:51.960 16:05:34 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:06:51.960 16:05:34 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.960 16:05:34 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:51.960 16:05:34 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:51.960 16:05:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:51.960 16:05:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.960 16:05:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.960 ************************************ 00:06:51.960 START TEST accel_missing_filename 00:06:51.960 ************************************ 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.960 16:05:34 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:51.960 16:05:34 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:51.960 [2024-07-15 16:05:34.903470] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:51.960 [2024-07-15 16:05:34.903524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198732 ] 00:06:51.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.218 [2024-07-15 16:05:34.964827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.218 [2024-07-15 16:05:35.061423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.218 [2024-07-15 16:05:35.124830] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.478 [2024-07-15 16:05:35.207573] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:52.478 A filename is required. 
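run_test accel_missing_filename wraps accel_perf in NOT, so the "A filename is required." failure above is the passing outcome. A minimal sketch of such an exit-status inverter, under the assumption of a simplified shape (the real helper in autotest_common.sh also validates its argument and normalizes the exit status, as the surrounding trace shows):

    # Invert a command's exit status: an expected failure becomes a test pass.
    NOT() {
        if "$@"; then
            return 1   # unexpectedly succeeded -> the wrapped test should fail
        fi
        return 0       # failed as expected -> the wrapped test passes
    }

    NOT false && echo "failure detected as expected"
    NOT true  || echo "unexpected success flagged"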
00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.478 00:06:52.478 real 0m0.401s 00:06:52.478 user 0m0.285s 00:06:52.478 sys 0m0.145s 00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.478 16:05:35 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:52.478 ************************************ 00:06:52.478 END TEST accel_missing_filename 00:06:52.478 ************************************ 00:06:52.478 16:05:35 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.478 16:05:35 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:52.478 16:05:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.478 16:05:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.478 ************************************ 00:06:52.478 START TEST accel_compress_verify 00:06:52.478 ************************************ 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.478 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.478 
16:05:35 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:52.478 16:05:35 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:52.478 [2024-07-15 16:05:35.353335] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:52.478 [2024-07-15 16:05:35.353398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198885 ] 00:06:52.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.478 [2024-07-15 16:05:35.415342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.738 [2024-07-15 16:05:35.509976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.738 [2024-07-15 16:05:35.572959] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.738 [2024-07-15 16:05:35.661502] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:52.999 00:06:52.999 Compression does not support the verify option, aborting. 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.999 00:06:52.999 real 0m0.411s 00:06:52.999 user 0m0.292s 00:06:52.999 sys 0m0.152s 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.999 16:05:35 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:52.999 ************************************ 00:06:52.999 END TEST accel_compress_verify 00:06:52.999 ************************************ 00:06:52.999 16:05:35 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:52.999 16:05:35 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:52.999 16:05:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.999 16:05:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.999 ************************************ 00:06:52.999 START TEST accel_wrong_workload 00:06:52.999 ************************************ 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
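The valid_exec_arg calls traced above gate the NOT wrapper: before running the candidate command, the harness checks with `type -t` that the word resolves to something executable. A minimal sketch; the exact set of accepted types here is an assumption, since the log only shows the case dispatch:

    valid_exec_arg() {
        local arg=$1
        # type -t prints: alias, keyword, function, builtin, or file (empty if unknown).
        case "$(type -t "$arg")" in
            function|builtin|file) return 0 ;;
            *) echo "not executable: $arg" >&2; return 1 ;;
        esac
    }

    valid_exec_arg ls          && echo "ls accepted"
    valid_exec_arg no_such_cmd || echo "rejected as expected"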
00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:52.999 16:05:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:52.999 Unsupported workload type: foobar 00:06:52.999 [2024-07-15 16:05:35.806701] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:52.999 accel_perf options: 00:06:52.999 [-h help message] 00:06:52.999 [-q queue depth per core] 00:06:52.999 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.999 [-T number of threads per core 00:06:52.999 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.999 [-t time in seconds] 00:06:52.999 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.999 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:52.999 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.999 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.999 [-S for crc32c workload, use this seed value (default 0) 00:06:52.999 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.999 [-f for fill workload, use this BYTE value (default 255) 00:06:52.999 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.999 [-y verify result if this switch is on] 00:06:52.999 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.999 Can be used to spread operations across a wider range of memory. 
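The usage text above maps directly onto the invocations used elsewhere in this run. A few illustrative command lines built from those options (the crc32c and fill forms appear verbatim later in this log; the xor form is a hypothetical valid counterpart to the -x -1 failure case tested below):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf

    "$PERF" -t 1 -w crc32c -S 32 -y             # 1-second crc32c run, seed 32, verify results
    "$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y  # fill with byte 128, queue depth 64, 64 tasks/core
    "$PERF" -t 1 -w xor -y -x 3                 # xor across 3 source buffers (minimum is 2)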
00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.999 00:06:52.999 real 0m0.020s 00:06:52.999 user 0m0.013s 00:06:52.999 sys 0m0.007s 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.999 16:05:35 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:52.999 ************************************ 00:06:52.999 END TEST accel_wrong_workload 00:06:52.999 ************************************ 00:06:52.999 Error: writing output failed: Broken pipe 00:06:52.999 16:05:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.999 16:05:35 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:52.999 16:05:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.999 16:05:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.999 ************************************ 00:06:52.999 START TEST accel_negative_buffers 00:06:52.999 ************************************ 00:06:52.999 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.999 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:52.999 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:52.999 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:52.999 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.999 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:52.999 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.000 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:53.000 16:05:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:53.000 -x option must be non-negative. 
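The es bookkeeping traced through autotest_common.sh in these runs (es=234 -> 106 -> 1, es=161 -> 33 -> 1) normalizes exit statuses before the final NOT assertion: values above 128 have the signal offset stripped, and anything still nonzero collapses to 1. A minimal sketch of that normalization, assuming the case statement does no more than the collapse visible in the trace:

    normalize_es() {
        local es=$1
        (( es > 128 )) && es=$((es - 128))  # 234 -> 106, 161 -> 33: strip the signal offset
        case "$es" in
            0) ;;                           # success stays 0
            *) es=1 ;;                      # any remaining nonzero collapses to 1
        esac
        return "$es"
    }

    normalize_es 234; echo "normalized to $?"   # prints: normalized to 1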
00:06:53.000 [2024-07-15 16:05:35.879701] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:53.000 accel_perf options: 00:06:53.000 [-h help message] 00:06:53.000 [-q queue depth per core] 00:06:53.000 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.000 [-T number of threads per core 00:06:53.000 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.000 [-t time in seconds] 00:06:53.000 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.000 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:53.000 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.000 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.000 [-S for crc32c workload, use this seed value (default 0) 00:06:53.000 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.000 [-f for fill workload, use this BYTE value (default 255) 00:06:53.000 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.000 [-y verify result if this switch is on] 00:06:53.000 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.000 Can be used to spread operations across a wider range of memory. 00:06:53.000 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:53.000 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.000 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.000 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.000 00:06:53.000 real 0m0.024s 00:06:53.000 user 0m0.015s 00:06:53.000 sys 0m0.009s 00:06:53.000 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.000 16:05:35 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:53.000 ************************************ 00:06:53.000 END TEST accel_negative_buffers 00:06:53.000 ************************************ 00:06:53.000 Error: writing output failed: Broken pipe 00:06:53.000 16:05:35 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:53.000 16:05:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.000 16:05:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.000 16:05:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.000 ************************************ 00:06:53.000 START TEST accel_crc32c 00:06:53.000 ************************************ 00:06:53.000 16:05:35 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:53.000 16:05:35 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:53.000 [2024-07-15 16:05:35.943258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:53.000 [2024-07-15 16:05:35.943326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198945 ] 00:06:53.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.259 [2024-07-15 16:05:36.006657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.259 [2024-07-15 16:05:36.101371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.259 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.260 16:05:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:54.638 16:05:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.638 00:06:54.638 real 0m1.404s 00:06:54.638 user 0m1.269s 00:06:54.638 sys 0m0.138s 00:06:54.638 16:05:37 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.638 16:05:37 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:54.638 ************************************ 00:06:54.638 END TEST accel_crc32c 00:06:54.638 ************************************ 00:06:54.638 16:05:37 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:54.638 16:05:37 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:54.638 16:05:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.638 16:05:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.638 ************************************ 00:06:54.638 START TEST accel_crc32c_C2 00:06:54.638 ************************************ 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:54.638 16:05:37 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:54.638 [2024-07-15 16:05:37.393070] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:54.638 [2024-07-15 16:05:37.393137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199222 ] 00:06:54.638 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.638 [2024-07-15 16:05:37.456483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.638 [2024-07-15 16:05:37.549643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.638 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.639 16:05:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.021 
16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.021 00:06:56.021 real 0m1.395s 00:06:56.021 user 0m1.257s 00:06:56.021 sys 0m0.141s 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.021 16:05:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:56.021 ************************************ 00:06:56.021 END TEST accel_crc32c_C2 00:06:56.021 ************************************ 00:06:56.021 16:05:38 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:56.021 16:05:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:56.021 16:05:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.021 16:05:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.021 ************************************ 00:06:56.021 START TEST accel_copy 00:06:56.022 ************************************ 00:06:56.022 16:05:38 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.022 16:05:38 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:56.022 16:05:38 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:56.022 [2024-07-15 16:05:38.838799] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:56.022 [2024-07-15 16:05:38.838862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199380 ] 00:06:56.022 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.022 [2024-07-15 16:05:38.903017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.022 [2024-07-15 16:05:38.995659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.282 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.283 16:05:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
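The long val= sequences above come from accel_test replaying the expected configuration into accel.sh, which consumes it with `while IFS=: read -r var val` and a case dispatch into variables such as accel_opc and accel_module. A minimal sketch of that consumption loop; the key names handled and the sample input are illustrative, not captured from this run:

    # Feed var:val pairs into the same read/case pattern the trace shows.
    printf 'opc:copy\nmodule:software\n' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)    echo "accel_opc    <- $val" ;;  # e.g. copy, crc32c, fill
            module) echo "accel_module <- $val" ;;  # e.g. software
            *)      : ;;                            # keys not modeled in this sketch
        esac
    done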
00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:57.658 16:05:40 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.658 00:06:57.658 real 0m1.411s 00:06:57.658 user 0m1.258s 00:06:57.658 sys 0m0.155s 00:06:57.658 16:05:40 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.658 16:05:40 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.658 ************************************ 00:06:57.658 END TEST accel_copy 00:06:57.658 ************************************ 00:06:57.658 16:05:40 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.658 16:05:40 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:57.658 16:05:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.658 16:05:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.658 ************************************ 00:06:57.658 START TEST accel_fill 00:06:57.658 ************************************ 00:06:57.658 16:05:40 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.658 16:05:40 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:57.658 [2024-07-15 16:05:40.293877] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:57.658 [2024-07-15 16:05:40.293934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199537 ] 00:06:57.658 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.658 [2024-07-15 16:05:40.357268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.658 [2024-07-15 16:05:40.452072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.658 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.659 16:05:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:59.038 16:05:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.038 00:06:59.038 real 0m1.412s 00:06:59.038 user 0m1.261s 00:06:59.038 sys 0m0.154s 00:06:59.038 16:05:41 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.038 16:05:41 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:59.038 ************************************ 00:06:59.038 END TEST accel_fill 00:06:59.038 ************************************ 00:06:59.038 16:05:41 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:59.038 16:05:41 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:59.038 16:05:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.038 16:05:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.038 ************************************ 00:06:59.038 START TEST accel_copy_crc32c 00:06:59.038 ************************************ 00:06:59.038 16:05:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:59.038 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:59.038 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:59.038 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
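# The repeating "case $var / IFS=: / read -r var val" trace lines above are
# accel.sh consuming accel_perf output one "key: value" record at a time and
# latching the fields it needs. A minimal sketch of that loop, with a
# hypothetical here-doc standing in for the accel_perf co-process (the variable
# names accel_opc and accel_module match the assignments visible in the trace):
while IFS=: read -r var val; do
    case "$var" in
        opc) accel_opc=${val# } ;;        # e.g. copy_crc32c
        module) accel_module=${val# } ;;  # e.g. software
        *) : ;;                           # queue depth, run time, etc. not needed here
    esac
done <<'EOF'
opc: copy_crc32c
module: software
EOF
[[ -n $accel_module && -n $accel_opc ]] && echo "verified $accel_opc via $accel_module"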
00:06:59.039 [2024-07-15 16:05:41.752367] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:59.039 [2024-07-15 16:05:41.752428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199690 ] 00:06:59.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.039 [2024-07-15 16:05:41.814784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.039 [2024-07-15 16:05:41.910202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.039 16:05:41 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.039 16:05:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.421 00:07:00.421 real 0m1.399s 00:07:00.421 user 0m1.254s 00:07:00.421 sys 0m0.147s 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.421 16:05:43 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:00.421 ************************************ 00:07:00.421 END TEST accel_copy_crc32c 00:07:00.421 ************************************ 00:07:00.421 16:05:43 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.421 16:05:43 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:00.421 16:05:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.421 16:05:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.421 ************************************ 00:07:00.421 START TEST accel_copy_crc32c_C2 00:07:00.421 ************************************ 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.421 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:00.421 [2024-07-15 16:05:43.192731] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:00.421 [2024-07-15 16:05:43.192887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199967 ] 00:07:00.421 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.421 [2024-07-15 16:05:43.255116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.421 [2024-07-15 16:05:43.345337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.680 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.681 16:05:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.615 00:07:01.615 real 0m1.402s 00:07:01.615 user 0m1.267s 00:07:01.615 sys 0m0.138s 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.615 16:05:44 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:01.615 
************************************ 00:07:01.615 END TEST accel_copy_crc32c_C2 00:07:01.615 ************************************ 00:07:01.874 16:05:44 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:01.874 16:05:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:01.874 16:05:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.874 16:05:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.874 ************************************ 00:07:01.874 START TEST accel_dualcast 00:07:01.874 ************************************ 00:07:01.874 16:05:44 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:01.874 16:05:44 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:01.874 [2024-07-15 16:05:44.640426] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
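# Every case in this suite reduces to a single accel_perf run; the dualcast pass
# starting here can be reproduced by hand with the binary path and flags logged
# above. Flag meanings are inferred from the traced vals (-t 1 -> '1 seconds',
# -w -> the workload name, -y -> software verification), so treat them as
# assumptions rather than authoritative accel_perf documentation:
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dualcast -y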
00:07:01.874 [2024-07-15 16:05:44.640490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200125 ] 00:07:01.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.874 [2024-07-15 16:05:44.702721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.874 [2024-07-15 16:05:44.796975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 
16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.134 16:05:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.070 16:05:46 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:03.070 16:05:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.070 00:07:03.070 real 0m1.415s 00:07:03.070 user 0m1.267s 00:07:03.070 sys 0m0.150s 00:07:03.070 16:05:46 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.070 16:05:46 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:03.070 ************************************ 00:07:03.070 END TEST accel_dualcast 00:07:03.070 ************************************ 00:07:03.326 16:05:46 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:03.326 16:05:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:03.326 16:05:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.326 16:05:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.326 ************************************ 00:07:03.326 START TEST accel_compare 00:07:03.326 ************************************ 00:07:03.326 16:05:46 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:03.326 16:05:46 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:03.326 [2024-07-15 16:05:46.100502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
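# The START TEST / END TEST banners framing each case come from the run_test
# helper in autotest_common.sh, which also toggles xtrace (the "set +x" lines
# above) and reports real/user/sys times. A simplified sketch of that wrapper,
# with the banner and timing details assumed from what this log shows:
run_test_sketch() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" \
                  '************************************'
    time "$@"              # e.g. accel_test -t 1 -w compare -y
    local rc=$?
    printf '%s\n' '************************************' "END TEST $name" \
                  '************************************'
    return $rc
}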
00:07:03.326 [2024-07-15 16:05:46.100554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200276 ] 00:07:03.326 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.326 [2024-07-15 16:05:46.163669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.326 [2024-07-15 16:05:46.257453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:03.584 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.585 16:05:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.519 16:05:47 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:04.519 16:05:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.519 00:07:04.519 real 0m1.410s 00:07:04.519 user 0m1.271s 00:07:04.519 sys 0m0.141s 00:07:04.519 16:05:47 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.519 16:05:47 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:04.519 ************************************ 00:07:04.519 END TEST accel_compare 00:07:04.519 ************************************ 00:07:04.775 16:05:47 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:04.775 16:05:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:04.775 16:05:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.775 16:05:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.775 ************************************ 00:07:04.775 START TEST accel_xor 00:07:04.775 ************************************ 00:07:04.775 16:05:47 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:04.775 16:05:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:04.775 16:05:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:04.775 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:04.776 16:05:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:04.776 [2024-07-15 16:05:47.557108] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
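# The "-c /dev/fd/62" argument in every accel_perf command line above is a JSON
# config delivered over file descriptor 62: build_accel_config gathers optional
# entries in the accel_json_cfg array (empty in these runs, hence the
# "[[ -n '' ]]" checks), joins them with IFS=',' and validates the result with
# 'jq -r .'. A rough sketch of that plumbing -- the JSON key name here is an
# assumption, not the harness's actual schema:
build_accel_config_sketch() {
    local accel_json_cfg=() IFS=,
    printf '{"accel_entries":[%s]}' "${accel_json_cfg[*]}" | jq -r .
}
exec 62< <(build_accel_config_sketch)
# ...after which accel_perf -c /dev/fd/62 reads its config from that descriptor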
00:07:04.776 [2024-07-15 16:05:47.557175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200550 ] 00:07:04.776 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.776 [2024-07-15 16:05:47.619403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.776 [2024-07-15 16:05:47.711629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.034 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.034 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.034 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.034 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.034 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.034 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.034 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.035 16:05:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.970 
16:05:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:05.970 16:05:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.970 00:07:05.970 real 0m1.398s 00:07:05.970 user 0m1.261s 00:07:05.970 sys 0m0.140s 00:07:05.970 16:05:48 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.970 16:05:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:05.970 ************************************ 00:07:05.970 END TEST accel_xor 00:07:05.971 ************************************ 00:07:06.228 16:05:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:06.228 16:05:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:06.228 16:05:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.228 16:05:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 ************************************ 00:07:06.228 START TEST accel_xor 00:07:06.228 ************************************ 00:07:06.228 16:05:48 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:06.228 16:05:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:06.229 [2024-07-15 16:05:48.999274] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
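The accel_xor pass beginning above repeats the xor workload with three source buffers (-x 3) rather than two. Stripped of the xtrace noise, accel.sh is driving a single accel_perf run; a minimal standalone reproduction, assuming the same workspace layout as this job, would be:

    # 1-second software XOR workload, 4096-byte buffers, three sources, verified (-y)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

The -c /dev/fd/62 argument in the traced command only feeds the harness-built JSON accel config (apparently empty here, since no module overrides are set in the trace) and can be dropped when running by hand.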
00:07:06.229 [2024-07-15 16:05:48.999336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200710 ] 00:07:06.229 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.229 [2024-07-15 16:05:49.061789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.229 [2024-07-15 16:05:49.155878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:06.489 16:05:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.423 
16:05:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:07.423 16:05:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.423 00:07:07.423 real 0m1.415s 00:07:07.423 user 0m1.271s 00:07:07.423 sys 0m0.146s 00:07:07.423 16:05:50 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.423 16:05:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:07.423 ************************************ 00:07:07.423 END TEST accel_xor 00:07:07.424 ************************************ 00:07:07.685 16:05:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:07.685 16:05:50 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:07.685 16:05:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.685 16:05:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.685 ************************************ 00:07:07.685 START TEST accel_dif_verify 00:07:07.685 ************************************ 00:07:07.685 16:05:50 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:07.685 16:05:50 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:07.685 [2024-07-15 16:05:50.461075] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
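The accel_dif_verify test starting above exercises T10 DIF checking in the software module: each data block carries an 8-byte Data Integrity Field (2-byte guard CRC, 2-byte application tag, 4-byte reference tag) that the workload validates. The '4096 bytes', '512 bytes' and '8 bytes' values traced below describe the buffer and DIF geometry the test expects. A hand-run equivalent, under the same path assumption as the xor sketch earlier (no geometry flags appear in the trace, so the sizes presumably come from accel_perf's defaults):

    # 1-second software dif_verify workload; DIF geometry left at accel_perf's defaults
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify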
00:07:07.685 [2024-07-15 16:05:50.461151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200870 ] 00:07:07.685 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.685 [2024-07-15 16:05:50.523165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.685 [2024-07-15 16:05:50.617620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 
16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.944 16:05:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.880 
16:05:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:08.880 16:05:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.880 00:07:08.880 real 0m1.415s 00:07:08.880 user 0m1.268s 00:07:08.880 sys 0m0.151s 00:07:08.880 16:05:51 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.880 16:05:51 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:08.880 ************************************ 00:07:08.880 END TEST accel_dif_verify 00:07:08.880 ************************************ 00:07:09.141 16:05:51 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:09.141 16:05:51 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:09.142 16:05:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.142 16:05:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.142 ************************************ 00:07:09.142 START TEST accel_dif_generate 00:07:09.142 ************************************ 00:07:09.142 16:05:51 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
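accel_dif_generate, traced above, is the producing side of the previous check: rather than validating existing integrity fields, the software module computes and attaches the 8-byte DIF for each block. Note the shape every one of these passes closes with: a time-style real/user/sys triplet (about 1.4s real for the 1-second -t 1 workload plus app start-up and teardown) followed by the END TEST banner. A hand-run equivalent, same path assumption as above:

    # 1-second software dif_generate workload (produce DIF tags rather than check them)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate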
00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:09.142 16:05:51 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:09.142 [2024-07-15 16:05:51.919540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:09.142 [2024-07-15 16:05:51.919615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201027 ] 00:07:09.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.142 [2024-07-15 16:05:51.976807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.142 [2024-07-15 16:05:52.060677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.142 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.402 16:05:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:10.341 16:05:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.341 00:07:10.341 real 0m1.390s 00:07:10.341 user 0m1.262s 00:07:10.341 sys 
0m0.132s 00:07:10.341 16:05:53 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.341 16:05:53 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:10.341 ************************************ 00:07:10.341 END TEST accel_dif_generate 00:07:10.341 ************************************ 00:07:10.341 16:05:53 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:10.341 16:05:53 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:10.341 16:05:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.341 16:05:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.612 ************************************ 00:07:10.612 START TEST accel_dif_generate_copy 00:07:10.612 ************************************ 00:07:10.612 16:05:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:10.612 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:10.612 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:10.612 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:10.612 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:10.613 [2024-07-15 16:05:53.355883] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
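accel_dif_generate_copy, starting above, folds the previous operation into a copy: DIF tags are generated while the payload is copied to a destination buffer in a single pass, roughly how a protected write path stages data. Same hand-run shape as the earlier sketches:

    # 1-second software dif_generate_copy workload (generate DIF while copying the data)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy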
00:07:10.613 [2024-07-15 16:05:53.355945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201297 ] 00:07:10.613 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.613 [2024-07-15 16:05:53.424606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.613 [2024-07-15 16:05:53.518831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.613 16:05:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.993 00:07:11.993 real 0m1.404s 00:07:11.993 user 0m1.258s 00:07:11.993 sys 0m0.146s 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.993 16:05:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.993 ************************************ 00:07:11.993 END TEST accel_dif_generate_copy 00:07:11.993 ************************************ 00:07:11.993 16:05:54 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:11.993 16:05:54 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.993 16:05:54 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:11.993 16:05:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.993 16:05:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.993 ************************************ 00:07:11.993 START TEST accel_comp 00:07:11.993 ************************************ 00:07:11.993 16:05:54 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:11.993 16:05:54 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:11.993 [2024-07-15 16:05:54.814289] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:11.993 [2024-07-15 16:05:54.814352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201459 ] 00:07:11.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.993 [2024-07-15 16:05:54.878486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.993 [2024-07-15 16:05:54.971337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 
16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:12.253 16:05:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:13.631 16:05:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.631 00:07:13.631 real 0m1.415s 00:07:13.631 user 0m1.271s 00:07:13.631 sys 0m0.148s 00:07:13.631 16:05:56 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.631 16:05:56 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:13.631 ************************************ 00:07:13.631 END TEST accel_comp 00:07:13.631 ************************************ 00:07:13.631 16:05:56 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.631 16:05:56 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:13.631 16:05:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.631 16:05:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.631 ************************************ 00:07:13.631 START TEST accel_decomp 00:07:13.631 ************************************ 00:07:13.631 16:05:56 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:13.631 [2024-07-15 16:05:56.274340] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:13.631 [2024-07-15 16:05:56.274403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201612 ] 00:07:13.631 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.631 [2024-07-15 16:05:56.339012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.631 [2024-07-15 16:05:56.433410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 
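Each accel test boots a fresh SPDK application, so the bracketed DPDK EAL parameter line and the 'EAL: No free 2048 kB hugepages reported on node 1' notice recur before every run in this section. The notice looks benign here, since every reactor still starts and every test passes; if it needed chasing, the standard Linux views of the hugepage pools (not taken from this log) would be:

  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages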
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.631 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.632 16:05:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.049 16:05:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.050 16:05:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.050 00:07:15.050 real 0m1.412s 00:07:15.050 user 0m1.268s 00:07:15.050 sys 0m0.148s 00:07:15.050 16:05:57 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.050 16:05:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:15.050 ************************************ 00:07:15.050 END TEST accel_decomp 00:07:15.050 ************************************ 00:07:15.050 
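The timing block just printed ('real 0m1.412s') is typical of the single-core runs in this section: the '-t 1' workload accounts for one second, and the remainder is SPDK app startup and teardown. A quick sanity check with the numbers from this log:

  echo 'scale=3; 1.412 - 1.0' | bc   # ~0.41 s of per-test framework overhead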
16:05:57 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.050 16:05:57 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:15.050 16:05:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.050 16:05:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.050 ************************************ 00:07:15.050 START TEST accel_decmop_full 00:07:15.050 ************************************ 00:07:15.050 16:05:57 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:15.050 [2024-07-15 16:05:57.733587] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:15.050 [2024-07-15 16:05:57.733653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201885 ] 00:07:15.050 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.050 [2024-07-15 16:05:57.798297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.050 [2024-07-15 16:05:57.892302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.050 16:05:57 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.429 16:05:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.430 16:05:59 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.430 00:07:16.430 real 0m1.424s 00:07:16.430 user 0m1.283s 00:07:16.430 sys 0m0.145s 00:07:16.430 16:05:59 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.430 16:05:59 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:16.430 ************************************ 00:07:16.430 END TEST accel_decmop_full 00:07:16.430 ************************************ 00:07:16.430 16:05:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.430 16:05:59 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:16.430 16:05:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.430 16:05:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.430 ************************************ 00:07:16.430 START TEST accel_decomp_mcore 00:07:16.430 ************************************ 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
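accel_decmop_full is the same decompress workload with '-o 0' appended, and the effect shows up in its config trace: the buffer size changes from '4096 bytes' to '111250 bytes', presumably the full size of test/accel/bib (the log itself does not state the file size). Equivalent direct invocation, run from the spdk checkout, with the same '-c /dev/fd/62' caveat as the earlier sketch:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0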
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:16.430 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:16.430 [2024-07-15 16:05:59.200961] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:16.430 [2024-07-15 16:05:59.201029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202042 ] 00:07:16.430 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.430 [2024-07-15 16:05:59.265915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.430 [2024-07-15 16:05:59.357730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.430 [2024-07-15 16:05:59.357787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.430 [2024-07-15 16:05:59.357852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.430 [2024-07-15 16:05:59.357855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.690 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.690 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.690 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.690 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.691 16:05:59 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.691 16:05:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.627 00:07:17.627 real 0m1.403s 00:07:17.627 user 0m4.669s 00:07:17.627 sys 0m0.160s 00:07:17.627 16:06:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.628 16:06:00 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:17.628 ************************************ 00:07:17.628 END TEST accel_decomp_mcore 00:07:17.628 ************************************ 00:07:17.885 16:06:00 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.885 16:06:00 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:17.885 16:06:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.885 16:06:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.885 ************************************ 00:07:17.885 START TEST accel_decomp_full_mcore 00:07:17.885 ************************************ 00:07:17.885 16:06:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.885 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:17.885 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:17.885 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.885 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.885 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore 
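For accel_decomp_mcore the '-m 0xf' mask starts four reactors (cores 0 through 3 in the notices above), and the timing block reflects that: wall-clock time stays near 1.4 s while CPU time aggregates across the cores. Checking with the values from this log:

  echo 'scale=3; 4.669 / 4' | bc   # ~1.167 s of user time per core against 1.403 s real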
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:17.886 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:17.886 [2024-07-15 16:06:00.652153] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:17.886 [2024-07-15 16:06:00.652218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202252 ] 00:07:17.886 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.886 [2024-07-15 16:06:00.716245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.886 [2024-07-15 16:06:00.812882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.886 [2024-07-15 16:06:00.812936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.886 [2024-07-15 16:06:00.813059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.886 [2024-07-15 16:06:00.813062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:18.144 16:06:00 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.144 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.145 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.145 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.145 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:18.145 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:18.145 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:18.145 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:18.145 16:06:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.521 00:07:19.521 real 0m1.433s 00:07:19.521 user 0m4.769s 00:07:19.521 sys 0m0.150s 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.521 16:06:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:19.521 ************************************ 00:07:19.521 END TEST accel_decomp_full_mcore 00:07:19.521 ************************************ 00:07:19.521 16:06:02 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.521 16:06:02 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:19.521 16:06:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.521 16:06:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.521 ************************************ 00:07:19.521 START TEST accel_decomp_mthread 00:07:19.521 ************************************ 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
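accel_decomp_full_mcore simply combines the two variants above, full-buffer decompress ('-o 0') across the four-core mask ('-m 0xf'), and its 'user 0m4.769s' scales the same way as the plain mcore run. The flags below are copied verbatim from the accel_perf command line in this log; only dropping the '-c /dev/fd/62' config is an assumption:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf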
. 00:07:19.521 [2024-07-15 16:06:02.128003] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:19.521 [2024-07-15 16:06:02.128075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202477 ] 00:07:19.521 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.521 [2024-07-15 16:06:02.191613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.521 [2024-07-15 16:06:02.283419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.521 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.522 16:06:02 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.898 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.899 00:07:20.899 real 0m1.413s 00:07:20.899 user 0m1.270s 00:07:20.899 sys 0m0.147s 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.899 16:06:03 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:20.899 ************************************ 00:07:20.899 END TEST accel_decomp_mthread 00:07:20.899 ************************************ 00:07:20.899 16:06:03 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.899 16:06:03 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:20.899 16:06:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.899 16:06:03 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.899 ************************************ 00:07:20.899 START TEST accel_decomp_full_mthread 00:07:20.899 ************************************ 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:20.899 [2024-07-15 16:06:03.586972] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
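The command under test here is accel_perf, invoked exactly as shown in the trace. A minimal standalone sketch of the same run, assuming a built SPDK tree at ./spdk and dropping the -c /dev/fd/62 config the harness injects (flag meanings are inferred from the surrounding trace rather than quoted from accel_perf's help):

  # Software decompress for 1 second (-t 1) on 2 threads (-T 2), with
  # result verification (-y); -o 0 appears to select the full size of
  # the input (the '111250 bytes' value in the trace), where the
  # non-full variant above used 4096-byte blocks.
  ./spdk/build/examples/accel_perf -t 1 -w decompress \
      -l ./spdk/test/accel/bib -y -o 0 -T 2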
00:07:20.899 [2024-07-15 16:06:03.587046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202749 ] 00:07:20.899 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.899 [2024-07-15 16:06:03.650273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.899 [2024-07-15 16:06:03.744332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.899 16:06:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.280 00:07:22.280 real 0m1.451s 00:07:22.280 user 0m1.310s 00:07:22.280 sys 0m0.145s 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.280 16:06:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:22.280 ************************************ 00:07:22.280 END TEST accel_decomp_full_mthread 00:07:22.280 
************************************ 00:07:22.280 16:06:05 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:22.280 16:06:05 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.280 16:06:05 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:22.280 16:06:05 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:22.280 16:06:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.280 16:06:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.280 16:06:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.280 16:06:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.280 16:06:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.280 16:06:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.280 16:06:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.281 16:06:05 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:22.281 16:06:05 accel -- accel/accel.sh@41 -- # jq -r . 00:07:22.281 ************************************ 00:07:22.281 START TEST accel_dif_functional_tests 00:07:22.281 ************************************ 00:07:22.281 16:06:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.281 [2024-07-15 16:06:05.101403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:22.281 [2024-07-15 16:06:05.101491] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202911 ] 00:07:22.281 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.281 [2024-07-15 16:06:05.164813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.541 [2024-07-15 16:06:05.261509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.541 [2024-07-15 16:06:05.261575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.541 [2024-07-15 16:06:05.261578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.541 00:07:22.541 00:07:22.541 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.541 http://cunit.sourceforge.net/ 00:07:22.541 00:07:22.541 00:07:22.541 Suite: accel_dif 00:07:22.541 Test: verify: DIF generated, GUARD check ...passed 00:07:22.541 Test: verify: DIF generated, APPTAG check ...passed 00:07:22.541 Test: verify: DIF generated, REFTAG check ...passed 00:07:22.541 Test: verify: DIF not generated, GUARD check ...[2024-07-15 16:06:05.357312] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.541 passed 00:07:22.541 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 16:06:05.357381] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.541 passed 00:07:22.541 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 16:06:05.357430] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.541 passed 00:07:22.541 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:22.541 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 16:06:05.357494] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:22.541 passed 00:07:22.541 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:22.541 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:22.541 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:22.541 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 16:06:05.357628] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:22.541 passed 00:07:22.541 Test: verify copy: DIF generated, GUARD check ...passed 00:07:22.541 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:22.541 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:22.541 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 16:06:05.357813] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.541 passed 00:07:22.541 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 16:06:05.357852] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.541 passed 00:07:22.541 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 16:06:05.357886] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.541 passed 00:07:22.541 Test: generate copy: DIF generated, GUARD check ...passed 00:07:22.541 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:22.541 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:22.541 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:22.541 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:22.541 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:22.541 Test: generate copy: iovecs-len validate ...[2024-07-15 16:06:05.358119] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
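The accel_dif suite traced here is a CUnit binary (test/accel/dif/dif) fed an accel JSON config over /dev/fd/62 by the harness. Note that the *ERROR* lines above are expected: each negative case ("DIF not generated", "iovecs-len validate") passes precisely because the DIF library reports the mismatch. A hedged sketch of running the suite by hand, where the empty JSON object is an assumption standing in for the config that build_accel_config produces:

  # Run the DIF/DIX functional CUnit suite; the config file descriptor
  # is supplied via process substitution instead of /dev/fd/62.
  ./spdk/test/accel/dif/dif -c <(echo '{}')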
00:07:22.541 passed 00:07:22.541 Test: generate copy: buffer alignment validate ...passed 00:07:22.541 00:07:22.541 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.541 suites 1 1 n/a 0 0 00:07:22.541 tests 26 26 26 0 0 00:07:22.541 asserts 115 115 115 0 n/a 00:07:22.541 00:07:22.541 Elapsed time = 0.002 seconds 00:07:22.801 00:07:22.801 real 0m0.507s 00:07:22.801 user 0m0.779s 00:07:22.801 sys 0m0.191s 00:07:22.801 16:06:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.801 16:06:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 ************************************ 00:07:22.801 END TEST accel_dif_functional_tests 00:07:22.801 ************************************ 00:07:22.801 00:07:22.801 real 0m31.830s 00:07:22.801 user 0m35.172s 00:07:22.801 sys 0m4.635s 00:07:22.801 16:06:05 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.801 16:06:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 ************************************ 00:07:22.801 END TEST accel 00:07:22.801 ************************************ 00:07:22.801 16:06:05 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:22.801 16:06:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.801 16:06:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.801 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 ************************************ 00:07:22.801 START TEST accel_rpc 00:07:22.801 ************************************ 00:07:22.801 16:06:05 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:22.801 * Looking for test storage... 00:07:22.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:22.801 16:06:05 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:22.801 16:06:05 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=203096 00:07:22.801 16:06:05 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:22.801 16:06:05 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 203096 00:07:22.801 16:06:05 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 203096 ']' 00:07:22.801 16:06:05 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.801 16:06:05 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.801 16:06:05 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.801 16:06:05 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.801 16:06:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.801 [2024-07-15 16:06:05.742694] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
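The accel_rpc test starting here boots spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be changed before the framework initializes; only then is framework_start_init issued. The RPC sequence the trace performs, reduced to direct rpc.py calls (paths assume the workspace layout shown in the log):

  ./spdk/build/bin/spdk_tgt --wait-for-rpc &
  # ... wait for /var/tmp/spdk.sock to accept connections ...
  ./spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  ./spdk/scripts/rpc.py framework_start_init
  ./spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software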
00:07:22.801 [2024-07-15 16:06:05.742787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203096 ] 00:07:22.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.061 [2024-07-15 16:06:05.803246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.061 [2024-07-15 16:06:05.888319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.061 16:06:05 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.061 16:06:05 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:23.061 16:06:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:23.061 16:06:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:23.061 16:06:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:23.061 16:06:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:23.061 16:06:05 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:23.061 16:06:05 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.061 16:06:05 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.061 16:06:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.061 ************************************ 00:07:23.061 START TEST accel_assign_opcode 00:07:23.061 ************************************ 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.061 [2024-07-15 16:06:05.968962] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.061 [2024-07-15 16:06:05.976970] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.061 16:06:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.319 16:06:06 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.319 software 00:07:23.319 00:07:23.319 real 0m0.291s 00:07:23.319 user 0m0.032s 00:07:23.319 sys 0m0.008s 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.319 16:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.319 ************************************ 00:07:23.319 END TEST accel_assign_opcode 00:07:23.319 ************************************ 00:07:23.319 16:06:06 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 203096 00:07:23.319 16:06:06 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 203096 ']' 00:07:23.319 16:06:06 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 203096 00:07:23.319 16:06:06 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:23.319 16:06:06 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.319 16:06:06 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 203096 00:07:23.579 16:06:06 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.579 16:06:06 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.579 16:06:06 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 203096' 00:07:23.579 killing process with pid 203096 00:07:23.579 16:06:06 accel_rpc -- common/autotest_common.sh@965 -- # kill 203096 00:07:23.579 16:06:06 accel_rpc -- common/autotest_common.sh@970 -- # wait 203096 00:07:23.838 00:07:23.838 real 0m1.077s 00:07:23.838 user 0m0.978s 00:07:23.838 sys 0m0.436s 00:07:23.838 16:06:06 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.838 16:06:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.838 ************************************ 00:07:23.838 END TEST accel_rpc 00:07:23.838 ************************************ 00:07:23.838 16:06:06 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:23.838 16:06:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.838 16:06:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.838 16:06:06 -- common/autotest_common.sh@10 -- # set +x 00:07:23.838 ************************************ 00:07:23.838 START TEST app_cmdline 00:07:23.838 ************************************ 00:07:23.838 16:06:06 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:23.838 * Looking for test storage... 
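Teardown above runs killprocess, which probes the target with kill -0 and checks the process name before sending the signal. A sketch mirroring the checks visible in the trace (the function name and return codes are illustrative, not the autotest helper itself):

  killprocess_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1     # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" != sudo ] || return 1            # never kill sudo
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }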
00:07:23.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.838 16:06:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.838 16:06:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=203285 00:07:23.838 16:06:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.838 16:06:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 203285 00:07:23.838 16:06:06 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 203285 ']' 00:07:23.838 16:06:06 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.838 16:06:06 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.838 16:06:06 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.838 16:06:06 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.838 16:06:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.098 [2024-07-15 16:06:06.864440] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:24.098 [2024-07-15 16:06:06.864532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203285 ] 00:07:24.098 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.098 [2024-07-15 16:06:06.924227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.098 [2024-07-15 16:06:07.009850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.357 16:06:07 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.357 16:06:07 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:24.357 16:06:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:24.616 { 00:07:24.616 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:24.616 "fields": { 00:07:24.616 "major": 24, 00:07:24.616 "minor": 5, 00:07:24.616 "patch": 1, 00:07:24.616 "suffix": "-pre", 00:07:24.616 "commit": "5fa2f5086" 00:07:24.616 } 00:07:24.616 } 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.616 16:06:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.616 16:06:07 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.874 request: 00:07:24.874 { 00:07:24.874 "method": "env_dpdk_get_mem_stats", 00:07:24.874 "req_id": 1 00:07:24.874 } 00:07:24.874 Got JSON-RPC error response 00:07:24.874 response: 00:07:24.874 { 00:07:24.874 "code": -32601, 00:07:24.874 "message": "Method not found" 00:07:24.874 } 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:24.874 16:06:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 203285 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 203285 ']' 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 203285 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 203285 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 203285' 00:07:24.874 killing process with pid 203285 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@965 -- # kill 203285 00:07:24.874 16:06:07 app_cmdline -- common/autotest_common.sh@970 -- # wait 203285 00:07:25.441 00:07:25.441 real 0m1.470s 00:07:25.441 user 0m1.812s 00:07:25.441 sys 0m0.459s 00:07:25.441 16:06:08 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.441 16:06:08 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.441 ************************************ 00:07:25.441 END TEST app_cmdline 00:07:25.441 ************************************ 00:07:25.441 16:06:08 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.441 16:06:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:25.441 16:06:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.441 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:25.441 ************************************ 00:07:25.441 START TEST version 00:07:25.441 ************************************ 00:07:25.441 16:06:08 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:25.441 * Looking for test storage... 00:07:25.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.441 16:06:08 version -- app/version.sh@17 -- # get_header_version major 00:07:25.441 16:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # cut -f2 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.441 16:06:08 version -- app/version.sh@17 -- # major=24 00:07:25.441 16:06:08 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.441 16:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # cut -f2 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.441 16:06:08 version -- app/version.sh@18 -- # minor=5 00:07:25.441 16:06:08 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.441 16:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # cut -f2 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.441 16:06:08 version -- app/version.sh@19 -- # patch=1 00:07:25.441 16:06:08 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.441 16:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # cut -f2 00:07:25.441 16:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.441 16:06:08 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.441 16:06:08 version -- app/version.sh@22 -- # version=24.5 00:07:25.441 16:06:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.441 16:06:08 version -- app/version.sh@25 -- # version=24.5.1 00:07:25.441 16:06:08 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:25.441 16:06:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:25.441 16:06:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.441 16:06:08 
version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:25.441 16:06:08 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:25.441 00:07:25.441 real 0m0.111s 00:07:25.441 user 0m0.057s 00:07:25.441 sys 0m0.075s 00:07:25.441 16:06:08 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.441 16:06:08 version -- common/autotest_common.sh@10 -- # set +x 00:07:25.441 ************************************ 00:07:25.441 END TEST version 00:07:25.441 ************************************ 00:07:25.441 16:06:08 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:25.441 16:06:08 -- spdk/autotest.sh@198 -- # uname -s 00:07:25.701 16:06:08 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:25.701 16:06:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:25.701 16:06:08 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:25.701 16:06:08 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:25.701 16:06:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:25.701 16:06:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:25.701 16:06:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.701 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:25.701 16:06:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:25.701 16:06:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:25.701 16:06:08 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:25.701 16:06:08 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:25.701 16:06:08 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:25.701 16:06:08 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:25.701 16:06:08 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.701 16:06:08 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:25.701 16:06:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.701 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:07:25.701 ************************************ 00:07:25.701 START TEST nvmf_tcp 00:07:25.701 ************************************ 00:07:25.701 16:06:08 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.701 * Looking for test storage... 00:07:25.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.701 16:06:08 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.701 16:06:08 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.701 16:06:08 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.701 16:06:08 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.701 16:06:08 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:25.702 16:06:08 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:25.702 16:06:08 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:25.702 16:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:25.702 16:06:08 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:25.702 16:06:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:25.702 16:06:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.702 16:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.702 ************************************ 00:07:25.702 START TEST nvmf_example 00:07:25.702 ************************************ 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:25.702 * Looking for test storage... 
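nvmf_example begins by re-sourcing test/nvmf/common.sh, so the same defaults scroll past again: listener ports 4420/4421/4422, the 192.168.100 IP prefix, and a host NQN freshly generated with nvme gen-hostnqn. The derivation can be reproduced by hand (the UUID differs per run; the parameter expansion is one way to obtain the host ID seen in the trace, not necessarily the exact expression common.sh uses):

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the uuid after the last colon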
00:07:25.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.702 16:06:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:28.236 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:28.236 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:28.236 Found net devices under 
0000:84:00.0: cvl_0_0 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:28.236 Found net devices under 0000:84:00.1: cvl_0_1 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:07:28.236 00:07:28.236 --- 10.0.0.2 ping statistics --- 00:07:28.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.236 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:07:28.236 00:07:28.236 --- 10.0.0.1 ping statistics --- 00:07:28.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.236 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=205718 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 205718 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 205718 ']' 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:28.236 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
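The sequence above is the harness building its loopback-style topology on a single host: one of the two ice ports (cvl_0_0) is moved into a dedicated network namespace to play the target, its peer (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits NVMe/TCP traffic on port 4420, reachability is proven with ping in both directions, and the example target is then launched inside the namespace. Condensed into a standalone sketch (interface and namespace names taken from the trace; paths shortened):

    # move the target-side port into its own namespace so target and
    # initiator can coexist on one host without kernel loopback shortcuts
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address plan: initiator 10.0.0.1, target 10.0.0.2 (/24)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # let NVMe/TCP (default port 4420) through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # start the example NVMe-oF target inside the target namespace
    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF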
00:07:28.237 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:28.237 16:06:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.237 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.804 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:29.062 16:06:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:29.062 EAL: No free 2048 kB hugepages reported on node 1 
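Once waitforlisten sees the application's RPC socket, the test provisions the target over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420; spdk_nvme_perf then drives it from the initiator side. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so a hand-run equivalent looks roughly like:

    # transport options (-t tcp -o) come from NVMF_TRANSPORT_OPTS above;
    # -u 8192 sets the IO unit size
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB backing bdev, 512-byte blocks -> reported as Malloc0
    scripts/rpc.py bdev_malloc_create 64 512

    # subsystem allowing any host (-a), with a fixed serial number (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # exercise the listener: queue depth 64, 4 KiB IOs, random mix with
    # 30% reads (-M 30), for 10 seconds
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'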
00:07:41.315 Initializing NVMe Controllers 00:07:41.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:41.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:41.315 Initialization complete. Launching workers. 00:07:41.315 ======================================================== 00:07:41.315 Latency(us) 00:07:41.315 Device Information : IOPS MiB/s Average min max 00:07:41.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14749.44 57.62 4339.43 857.80 16536.31 00:07:41.315 ======================================================== 00:07:41.315 Total : 14749.44 57.62 4339.43 857.80 16536.31 00:07:41.315 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.315 rmmod nvme_tcp 00:07:41.315 rmmod nvme_fabrics 00:07:41.315 rmmod nvme_keyring 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 205718 ']' 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 205718 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 205718 ']' 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 205718 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 205718 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 205718' 00:07:41.315 killing process with pid 205718 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 205718 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 205718 00:07:41.315 nvmf threads initialize successfully 00:07:41.315 bdev subsystem init successfully 00:07:41.315 created a nvmf target service 00:07:41.315 create targets's poll groups done 00:07:41.315 all subsystems of target started 00:07:41.315 nvmf target is running 00:07:41.315 all subsystems of target stopped 00:07:41.315 destroy targets's poll groups done 00:07:41.315 destroyed the nvmf target service 00:07:41.315 bdev subsystem finish successfully 00:07:41.315 nvmf threads destroy successfully 00:07:41.315 16:06:22 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.315 16:06:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.574 16:06:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.574 16:06:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:41.574 16:06:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.574 16:06:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.574 00:07:41.574 real 0m15.950s 00:07:41.574 user 0m44.995s 00:07:41.574 sys 0m3.629s 00:07:41.574 16:06:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.574 16:06:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.574 ************************************ 00:07:41.574 END TEST nvmf_example 00:07:41.574 ************************************ 00:07:41.574 16:06:24 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.574 16:06:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:41.574 16:06:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.574 16:06:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.574 ************************************ 00:07:41.574 START TEST nvmf_filesystem 00:07:41.574 ************************************ 00:07:41.574 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.836 * Looking for test storage... 
00:07:41.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:41.836 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:41.837 16:06:24 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.837 16:06:24 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:41.837 #define SPDK_CONFIG_H 00:07:41.837 #define SPDK_CONFIG_APPS 1 00:07:41.837 #define SPDK_CONFIG_ARCH native 00:07:41.837 #undef SPDK_CONFIG_ASAN 00:07:41.837 #undef SPDK_CONFIG_AVAHI 00:07:41.837 #undef SPDK_CONFIG_CET 00:07:41.837 #define SPDK_CONFIG_COVERAGE 1 00:07:41.837 #define SPDK_CONFIG_CROSS_PREFIX 00:07:41.837 #undef SPDK_CONFIG_CRYPTO 00:07:41.837 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:41.837 #undef SPDK_CONFIG_CUSTOMOCF 00:07:41.837 #undef SPDK_CONFIG_DAOS 00:07:41.837 #define SPDK_CONFIG_DAOS_DIR 00:07:41.837 #define SPDK_CONFIG_DEBUG 1 00:07:41.837 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:41.837 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:41.837 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:41.837 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.837 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:41.837 #undef SPDK_CONFIG_DPDK_UADK 00:07:41.837 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:41.837 #define SPDK_CONFIG_EXAMPLES 1 00:07:41.837 #undef SPDK_CONFIG_FC 00:07:41.837 #define SPDK_CONFIG_FC_PATH 00:07:41.837 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:41.837 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:41.837 #undef SPDK_CONFIG_FUSE 00:07:41.837 #undef SPDK_CONFIG_FUZZER 00:07:41.837 #define SPDK_CONFIG_FUZZER_LIB 00:07:41.837 #undef SPDK_CONFIG_GOLANG 00:07:41.837 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:41.837 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:41.837 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:41.837 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:41.837 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:41.837 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:41.837 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:41.837 #define SPDK_CONFIG_IDXD 1 00:07:41.837 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:41.837 #undef SPDK_CONFIG_IPSEC_MB 00:07:41.837 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:41.837 #define SPDK_CONFIG_ISAL 1 00:07:41.837 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:41.837 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:41.837 #define SPDK_CONFIG_LIBDIR 00:07:41.837 #undef SPDK_CONFIG_LTO 00:07:41.837 #define SPDK_CONFIG_MAX_LCORES 
00:07:41.837 #define SPDK_CONFIG_NVME_CUSE 1 00:07:41.837 #undef SPDK_CONFIG_OCF 00:07:41.837 #define SPDK_CONFIG_OCF_PATH 00:07:41.837 #define SPDK_CONFIG_OPENSSL_PATH 00:07:41.837 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:41.837 #define SPDK_CONFIG_PGO_DIR 00:07:41.837 #undef SPDK_CONFIG_PGO_USE 00:07:41.837 #define SPDK_CONFIG_PREFIX /usr/local 00:07:41.837 #undef SPDK_CONFIG_RAID5F 00:07:41.837 #undef SPDK_CONFIG_RBD 00:07:41.837 #define SPDK_CONFIG_RDMA 1 00:07:41.837 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:41.837 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:41.837 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:41.837 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:41.837 #define SPDK_CONFIG_SHARED 1 00:07:41.837 #undef SPDK_CONFIG_SMA 00:07:41.837 #define SPDK_CONFIG_TESTS 1 00:07:41.837 #undef SPDK_CONFIG_TSAN 00:07:41.837 #define SPDK_CONFIG_UBLK 1 00:07:41.837 #define SPDK_CONFIG_UBSAN 1 00:07:41.837 #undef SPDK_CONFIG_UNIT_TESTS 00:07:41.837 #undef SPDK_CONFIG_URING 00:07:41.837 #define SPDK_CONFIG_URING_PATH 00:07:41.837 #undef SPDK_CONFIG_URING_ZNS 00:07:41.837 #undef SPDK_CONFIG_USDT 00:07:41.837 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:41.837 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:41.837 #define SPDK_CONFIG_VFIO_USER 1 00:07:41.837 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:41.837 #define SPDK_CONFIG_VHOST 1 00:07:41.837 #define SPDK_CONFIG_VIRTIO 1 00:07:41.837 #undef SPDK_CONFIG_VTUNE 00:07:41.837 #define SPDK_CONFIG_VTUNE_DIR 00:07:41.837 #define SPDK_CONFIG_WERROR 1 00:07:41.837 #define SPDK_CONFIG_WPDK_DIR 00:07:41.837 #undef SPDK_CONFIG_XNVME 00:07:41.837 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:41.837 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:41.838 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 207425 ]] 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 207425 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:41.839 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.44kW8F 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.44kW8F/tests/target /tmp/spdk.44kW8F 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=949354496 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4335075328 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=36053213184 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=45083312128 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9030098944 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=22538280960 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=22541656064 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=9007878144 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=9016664064 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8785920 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem 
-- common/autotest_common.sh@361 -- # avails["$mount"]=22540959744 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=22541656064 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=696320 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4508323840 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4508327936 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:41.840 * Looking for test storage... 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=36053213184 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11244691456 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:41.840 16:06:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.840 16:06:24 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.840 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
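The NVMF_APP array assembled in the trace above is the usual bash pattern for composing a long command line safely: each argument is appended as its own array element, so later expansion never re-splits words. A minimal sketch of the same pattern, with the launch path assumed from the SPDK_BIN_DIR export earlier in the trace (not the script's verbatim body):

    NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")            # assumed binary path; SPDK_BIN_DIR is exported above
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id and trace-group mask, exactly as traced
    NVMF_APP+=("${NO_HUGE[@]}")                    # NO_HUGE=() earlier, so this expands to zero words
    "${NVMF_APP[@]}"                               # quoted array expansion keeps every argument intact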
00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.841 16:06:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
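The vendor/device tables being filled here (intel=0x8086, mellanox=0x15b3, the e810/x722/mlx arrays) feed the discovery loop that follows: each matched PCI function is resolved to its kernel netdev through sysfs. Condensed from the trace, under the assumption that pci_devs holds addresses such as 0000:84:00.0:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:84:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the interface name
        net_devs+=("${pci_net_devs[@]}")                   # cvl_0_0 and cvl_0_1 accumulate here
    done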
00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:43.812 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:43.812 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.812 16:06:26 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:43.812 Found net devices under 0000:84:00.0: cvl_0_0 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:43.812 Found net devices under 0000:84:00.1: cvl_0_1 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.812 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:07:43.813 00:07:43.813 --- 10.0.0.2 ping statistics --- 00:07:43.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.813 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:07:43.813 00:07:43.813 --- 10.0.0.1 ping statistics --- 00:07:43.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.813 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.813 ************************************ 00:07:43.813 START TEST nvmf_filesystem_no_in_capsule 00:07:43.813 ************************************ 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:43.813 16:06:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=209073 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 209073 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 209073 ']' 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:43.813 16:06:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.071 [2024-07-15 16:06:26.827287] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:44.071 [2024-07-15 16:06:26.827384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.071 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.071 [2024-07-15 16:06:26.892681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.071 [2024-07-15 16:06:26.984006] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.071 [2024-07-15 16:06:26.984088] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.071 [2024-07-15 16:06:26.984102] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.071 [2024-07-15 16:06:26.984114] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.071 [2024-07-15 16:06:26.984123] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
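The target at pid 209073 is launched through ip netns exec inside cvl_0_0_ns_spdk, the namespace wired up a few entries earlier. Collapsed into one place, the topology the trace established is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port toward the target
    ping -c 1 10.0.0.2                                              # root namespace -> target namespace sanity check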
00:07:44.071 [2024-07-15 16:06:26.984240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.071 [2024-07-15 16:06:26.984268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.071 [2024-07-15 16:06:26.984330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.071 [2024-07-15 16:06:26.984332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.329 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:44.329 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:44.329 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.329 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.329 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.330 [2024-07-15 16:06:27.132518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.330 Malloc1 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.330 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.330 [2024-07-15 16:06:27.308004] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:44.590 { 00:07:44.590 "name": "Malloc1", 00:07:44.590 "aliases": [ 00:07:44.590 "2ccb1661-3769-4e7a-8a0e-3035a51fd72f" 00:07:44.590 ], 00:07:44.590 "product_name": "Malloc disk", 00:07:44.590 "block_size": 512, 00:07:44.590 "num_blocks": 1048576, 00:07:44.590 "uuid": "2ccb1661-3769-4e7a-8a0e-3035a51fd72f", 00:07:44.590 "assigned_rate_limits": { 00:07:44.590 "rw_ios_per_sec": 0, 00:07:44.590 "rw_mbytes_per_sec": 0, 00:07:44.590 "r_mbytes_per_sec": 0, 00:07:44.590 "w_mbytes_per_sec": 0 00:07:44.590 }, 00:07:44.590 "claimed": true, 00:07:44.590 "claim_type": "exclusive_write", 00:07:44.590 "zoned": false, 00:07:44.590 "supported_io_types": { 00:07:44.590 "read": true, 00:07:44.590 "write": true, 00:07:44.590 "unmap": true, 00:07:44.590 "write_zeroes": true, 00:07:44.590 "flush": true, 00:07:44.590 "reset": true, 00:07:44.590 "compare": false, 00:07:44.590 "compare_and_write": false, 00:07:44.590 "abort": true, 00:07:44.590 "nvme_admin": false, 00:07:44.590 "nvme_io": false 00:07:44.590 }, 00:07:44.590 "memory_domains": [ 00:07:44.590 { 00:07:44.590 "dma_device_id": "system", 00:07:44.590 "dma_device_type": 1 00:07:44.590 }, 00:07:44.590 { 00:07:44.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.590 "dma_device_type": 2 00:07:44.590 } 00:07:44.590 ], 00:07:44.590 "driver_specific": {} 00:07:44.590 } 00:07:44.590 ]' 00:07:44.590 
16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:44.590 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:44.591 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:44.591 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:44.591 16:06:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.161 16:06:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.161 16:06:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:45.161 16:06:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.161 16:06:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:45.161 16:06:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.696 16:06:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.696 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:47.955 16:06:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.893 ************************************ 00:07:48.893 START TEST filesystem_ext4 00:07:48.893 ************************************ 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:48.893 16:06:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:48.893 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.151 Discarding device blocks: 0/522240 done 00:07:49.151 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.151 
Filesystem UUID: b7681170-4fec-4c2e-be34-cb8ec60eb8ce 00:07:49.151 Superblock backups stored on blocks: 00:07:49.151 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.151 00:07:49.151 Allocating group tables: 0/64 done 00:07:49.151 Writing inode tables: 0/64 done 00:07:49.151 Creating journal (8192 blocks): done 00:07:49.151 Writing superblocks and filesystem accounting information: 0/64 done 00:07:49.151 00:07:49.151 16:06:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:49.151 16:06:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.086 16:06:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 209073 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.086 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.346 00:07:50.346 real 0m1.206s 00:07:50.346 user 0m0.011s 00:07:50.346 sys 0m0.054s 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:50.346 ************************************ 00:07:50.346 END TEST filesystem_ext4 00:07:50.346 ************************************ 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.346 ************************************ 00:07:50.346 START TEST filesystem_btrfs 00:07:50.346 ************************************ 00:07:50.346 16:06:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:50.346 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:50.347 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:50.347 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:50.347 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:50.612 btrfs-progs v6.6.2 00:07:50.612 See https://btrfs.readthedocs.io for more information. 00:07:50.612 00:07:50.612 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:50.612 NOTE: several default settings have changed in version 5.15, please make sure 00:07:50.612 this does not affect your deployments: 00:07:50.612 - DUP for metadata (-m dup) 00:07:50.612 - enabled no-holes (-O no-holes) 00:07:50.612 - enabled free-space-tree (-R free-space-tree) 00:07:50.612 00:07:50.612 Label: (null) 00:07:50.612 UUID: 5cb1440b-f4e3-4545-8352-af1e45f7ddae 00:07:50.612 Node size: 16384 00:07:50.612 Sector size: 4096 00:07:50.612 Filesystem size: 510.00MiB 00:07:50.612 Block group profiles: 00:07:50.612 Data: single 8.00MiB 00:07:50.612 Metadata: DUP 32.00MiB 00:07:50.612 System: DUP 8.00MiB 00:07:50.612 SSD detected: yes 00:07:50.612 Zoned device: no 00:07:50.612 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:50.612 Runtime features: free-space-tree 00:07:50.612 Checksum: crc32c 00:07:50.612 Number of devices: 1 00:07:50.612 Devices: 00:07:50.612 ID SIZE PATH 00:07:50.612 1 510.00MiB /dev/nvme0n1p1 00:07:50.612 00:07:50.612 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:50.612 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 209073 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.871 00:07:50.871 real 0m0.717s 00:07:50.871 user 0m0.016s 00:07:50.871 sys 0m0.121s 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:50.871 ************************************ 00:07:50.871 END TEST filesystem_btrfs 00:07:50.871 ************************************ 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:50.871 16:06:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.871 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.129 ************************************ 00:07:51.129 START TEST filesystem_xfs 00:07:51.129 ************************************ 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:51.129 16:06:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:51.129 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:51.129 = sectsz=512 attr=2, projid32bit=1 00:07:51.129 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:51.129 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:51.129 data = bsize=4096 blocks=130560, imaxpct=25 00:07:51.129 = sunit=0 swidth=0 blks 00:07:51.129 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:51.129 log =internal log bsize=4096 blocks=16384, version=2 00:07:51.129 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:51.129 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:52.067 Discarding blocks...Done. 
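The ext4, btrfs, and xfs formats above are all driven by the make_filesystem() helper from common/autotest_common.sh. A minimal sketch of that helper, reconstructed from the xtrace lines (autotest_common.sh@922-@941) visible above; only the first, successful mkfs attempt appears in this capture, so the retry bound is an assumption:

  # Reconstructed from the xtrace above; the retry limit is assumed, not logged.
  make_filesystem() {
      local fstype=$1        # ext4 | btrfs | xfs
      local dev_name=$2      # e.g. /dev/nvme0n1p1
      local i=0
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F           # mke2fs forces with -F
      else
          force=-f           # mkfs.btrfs and mkfs.xfs force with -f
      fi
      while ! mkfs.$fstype $force "$dev_name"; do
          (( ++i > 5 )) && return 1   # assumed retry bound
          sleep 1
      done
      return 0
  }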
00:07:52.067 16:06:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:52.067 16:06:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 209073 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.602 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.602 00:07:54.602 real 0m3.231s 00:07:54.602 user 0m0.019s 00:07:54.602 sys 0m0.057s 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:54.603 ************************************ 00:07:54.603 END TEST filesystem_xfs 00:07:54.603 ************************************ 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:54.603 
16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 209073 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 209073 ']' 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 209073 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 209073 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 209073' 00:07:54.603 killing process with pid 209073 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 209073 00:07:54.603 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 209073 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:54.860 00:07:54.860 real 0m10.964s 00:07:54.860 user 0m42.041s 00:07:54.860 sys 0m1.674s 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 ************************************ 00:07:54.860 END TEST nvmf_filesystem_no_in_capsule 00:07:54.860 ************************************ 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 
************************************ 00:07:54.860 START TEST nvmf_filesystem_in_capsule 00:07:54.860 ************************************ 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=210622 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 210622 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 210622 ']' 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.860 16:06:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.119 [2024-07-15 16:06:37.844148] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:55.119 [2024-07-15 16:06:37.844230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.119 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.119 [2024-07-15 16:06:37.912436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.119 [2024-07-15 16:06:38.003618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.119 [2024-07-15 16:06:38.003682] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.119 [2024-07-15 16:06:38.003698] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.119 [2024-07-15 16:06:38.003712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.119 [2024-07-15 16:06:38.003723] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
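The in-capsule target is configured over JSON-RPC in the trace that follows. Condensed into plain commands, with every parameter taken from the log (the direct rpc.py invocation is an assumption, since the trace only shows the rpc_cmd wrapper):

  # Target side: TCP transport with 4096-byte in-capsule data, one 512 MiB
  # malloc bdev (512-byte blocks) exported as a namespace of cnode1.
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side, exactly as traced:
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420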
00:07:55.119 [2024-07-15 16:06:38.003796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.119 [2024-07-15 16:06:38.003831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.119 [2024-07-15 16:06:38.003947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.119 [2024-07-15 16:06:38.003949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.378 [2024-07-15 16:06:38.165630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.378 Malloc1 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.378 16:06:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.378 [2024-07-15 16:06:38.352236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.378 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:55.636 { 00:07:55.636 "name": "Malloc1", 00:07:55.636 "aliases": [ 00:07:55.636 "8aca0715-130e-4f8d-a511-3042fee8afc5" 00:07:55.636 ], 00:07:55.636 "product_name": "Malloc disk", 00:07:55.636 "block_size": 512, 00:07:55.636 "num_blocks": 1048576, 00:07:55.636 "uuid": "8aca0715-130e-4f8d-a511-3042fee8afc5", 00:07:55.636 "assigned_rate_limits": { 00:07:55.636 "rw_ios_per_sec": 0, 00:07:55.636 "rw_mbytes_per_sec": 0, 00:07:55.636 "r_mbytes_per_sec": 0, 00:07:55.636 "w_mbytes_per_sec": 0 00:07:55.636 }, 00:07:55.636 "claimed": true, 00:07:55.636 "claim_type": "exclusive_write", 00:07:55.636 "zoned": false, 00:07:55.636 "supported_io_types": { 00:07:55.636 "read": true, 00:07:55.636 "write": true, 00:07:55.636 "unmap": true, 00:07:55.636 "write_zeroes": true, 00:07:55.636 "flush": true, 00:07:55.636 "reset": true, 00:07:55.636 "compare": false, 00:07:55.636 "compare_and_write": false, 00:07:55.636 "abort": true, 00:07:55.636 "nvme_admin": false, 00:07:55.636 "nvme_io": false 00:07:55.636 }, 00:07:55.636 "memory_domains": [ 00:07:55.636 { 00:07:55.636 "dma_device_id": "system", 00:07:55.636 "dma_device_type": 1 00:07:55.636 }, 00:07:55.636 { 00:07:55.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.636 "dma_device_type": 2 00:07:55.636 } 00:07:55.636 ], 00:07:55.636 "driver_specific": {} 00:07:55.636 } 00:07:55.636 ]' 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:55.636 16:06:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.199 16:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.199 16:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:56.199 16:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.199 16:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:56.199 16:06:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:58.731 16:06:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:59.668 16:06:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.605 ************************************ 00:08:00.605 START TEST filesystem_in_capsule_ext4 00:08:00.605 ************************************ 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:00.605 16:06:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:00.605 mke2fs 1.46.5 (30-Dec-2021) 00:08:00.605 Discarding device blocks: 0/522240 done 00:08:00.605 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:00.605 Filesystem UUID: 8cb31529-19dc-4305-b723-fcc8ce4d583a 00:08:00.605 Superblock backups stored on blocks: 00:08:00.605 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:00.605 00:08:00.605 Allocating group tables: 0/64 done 00:08:00.605 Writing inode tables: 0/64 done 00:08:00.863 Creating journal (8192 blocks): done 00:08:01.688 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:01.688 00:08:01.688 16:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:01.688 16:06:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 210622 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.649 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.650 00:08:02.650 real 0m2.106s 00:08:02.650 user 0m0.010s 00:08:02.650 sys 0m0.061s 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:02.650 ************************************ 00:08:02.650 END TEST filesystem_in_capsule_ext4 00:08:02.650 ************************************ 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.650 ************************************ 00:08:02.650 START TEST filesystem_in_capsule_btrfs 00:08:02.650 ************************************ 00:08:02.650 16:06:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:02.650 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:02.909 btrfs-progs v6.6.2 00:08:02.909 See https://btrfs.readthedocs.io for more information. 00:08:02.909 00:08:02.909 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:02.909 NOTE: several default settings have changed in version 5.15, please make sure 00:08:02.909 this does not affect your deployments: 00:08:02.909 - DUP for metadata (-m dup) 00:08:02.909 - enabled no-holes (-O no-holes) 00:08:02.909 - enabled free-space-tree (-R free-space-tree) 00:08:02.909 00:08:02.909 Label: (null) 00:08:02.909 UUID: 4aa72808-0644-43b9-a3e9-dc80ec5ca83b 00:08:02.909 Node size: 16384 00:08:02.909 Sector size: 4096 00:08:02.909 Filesystem size: 510.00MiB 00:08:02.909 Block group profiles: 00:08:02.909 Data: single 8.00MiB 00:08:02.909 Metadata: DUP 32.00MiB 00:08:02.909 System: DUP 8.00MiB 00:08:02.909 SSD detected: yes 00:08:02.909 Zoned device: no 00:08:02.909 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:02.909 Runtime features: free-space-tree 00:08:02.909 Checksum: crc32c 00:08:02.909 Number of devices: 1 00:08:02.909 Devices: 00:08:02.909 ID SIZE PATH 00:08:02.909 1 510.00MiB /dev/nvme0n1p1 00:08:02.909 00:08:02.909 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:02.909 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.169 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.169 16:06:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 210622 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.169 00:08:03.169 real 0m0.484s 00:08:03.169 user 0m0.019s 00:08:03.169 sys 0m0.108s 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 ************************************ 00:08:03.169 END TEST filesystem_in_capsule_btrfs 00:08:03.169 ************************************ 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 ************************************ 00:08:03.169 START TEST filesystem_in_capsule_xfs 00:08:03.169 ************************************ 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:03.169 16:06:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:03.428 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:03.428 = sectsz=512 attr=2, projid32bit=1 00:08:03.428 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:03.428 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:03.428 data = bsize=4096 blocks=130560, imaxpct=25 00:08:03.428 = sunit=0 swidth=0 blks 00:08:03.428 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:03.428 log =internal log bsize=4096 blocks=16384, version=2 00:08:03.428 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:03.428 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:04.363 Discarding blocks...Done. 
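Each filesystem case then runs the same verification pass, and the suite tears down the same way the no-in-capsule run did. Condensed from the trace that follows (the retry counter i and error branches are elided):

  # Per-filesystem verification (target/filesystem.sh@23-@43 in the trace):
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                        # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible

  # Suite teardown (target/filesystem.sh@91-@101 in the trace):
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"        # killprocess: stop nvmf_tgt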
00:08:04.363 16:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:04.363 16:06:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 210622 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.900 00:08:06.900 real 0m3.478s 00:08:06.900 user 0m0.011s 00:08:06.900 sys 0m0.064s 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:06.900 ************************************ 00:08:06.900 END TEST filesystem_in_capsule_xfs 00:08:06.900 ************************************ 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:06.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.900 16:06:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 210622 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 210622 ']' 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 210622 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 210622 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 210622' 00:08:06.900 killing process with pid 210622 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 210622 00:08:06.900 16:06:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 210622 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:07.470 00:08:07.470 real 0m12.427s 00:08:07.470 user 0m47.775s 00:08:07.470 sys 0m1.779s 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 ************************************ 00:08:07.470 END TEST nvmf_filesystem_in_capsule 00:08:07.470 ************************************ 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.470 rmmod nvme_tcp 00:08:07.470 rmmod nvme_fabrics 00:08:07.470 rmmod nvme_keyring 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.470 16:06:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.372 16:06:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.372 00:08:09.372 real 0m27.796s 00:08:09.372 user 1m30.643s 00:08:09.372 sys 0m5.033s 00:08:09.372 16:06:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.372 16:06:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:09.372 ************************************ 00:08:09.372 END TEST nvmf_filesystem 00:08:09.372 ************************************ 00:08:09.631 16:06:52 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:09.631 16:06:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:09.631 16:06:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.631 16:06:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.631 ************************************ 00:08:09.631 START TEST nvmf_target_discovery 00:08:09.631 ************************************ 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:09.631 * Looking for test storage... 
00:08:09.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:09.631 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.632 16:06:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.561 16:06:54 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:11.561 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:11.561 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.561 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:11.562 Found net devices under 0000:84:00.0: cvl_0_0 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:11.562 Found net devices under 0000:84:00.1: cvl_0_1 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:11.562 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:08:11.820 00:08:11.820 --- 10.0.0.2 ping statistics --- 00:08:11.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.820 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:08:11.820 00:08:11.820 --- 10.0.0.1 ping statistics --- 00:08:11.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.820 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=214202 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 214202 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 214202 ']' 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:11.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:11.820 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:11.820 [2024-07-15 16:06:54.670193] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:11.820 [2024-07-15 16:06:54.670264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.820 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.820 [2024-07-15 16:06:54.739673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.080 [2024-07-15 16:06:54.831965] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.080 [2024-07-15 16:06:54.832028] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.080 [2024-07-15 16:06:54.832045] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.080 [2024-07-15 16:06:54.832058] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.080 [2024-07-15 16:06:54.832077] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.080 [2024-07-15 16:06:54.832159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.080 [2024-07-15 16:06:54.832194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.080 [2024-07-15 16:06:54.832322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.080 [2024-07-15 16:06:54.832324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 [2024-07-15 16:06:54.989584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:12.080 16:06:55 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 Null1 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 [2024-07-15 16:06:55.029897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 Null2 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:12.080 16:06:55 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.080 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.340 Null3 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.340 Null4 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.340 16:06:55 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:12.340 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420
00:08:12.341
00:08:12.341 Discovery Log Number of Records 6, Generation counter 6
00:08:12.341 =====Discovery Log Entry 0======
00:08:12.341 trtype: tcp
00:08:12.341 adrfam: ipv4
00:08:12.341 subtype: current discovery subsystem
00:08:12.341 treq: not required
00:08:12.341 portid: 0
00:08:12.341 trsvcid: 4420
00:08:12.341 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:12.341 traddr: 10.0.0.2
00:08:12.341 eflags: explicit discovery connections, duplicate discovery information
00:08:12.341 sectype: none
00:08:12.341 =====Discovery Log Entry 1======
00:08:12.341 trtype: tcp
00:08:12.341 adrfam: ipv4
00:08:12.341 subtype: nvme subsystem
00:08:12.341 treq: not required
00:08:12.341 portid: 0
00:08:12.341 trsvcid: 4420
00:08:12.341 subnqn: nqn.2016-06.io.spdk:cnode1
00:08:12.341 traddr: 10.0.0.2
00:08:12.341 eflags: none
00:08:12.341 sectype: none
00:08:12.341 =====Discovery Log Entry 2======
00:08:12.341 trtype: tcp
00:08:12.341 adrfam: ipv4
00:08:12.341 subtype: nvme subsystem
00:08:12.341 treq: not required
00:08:12.341 portid: 0
00:08:12.341 trsvcid: 4420
00:08:12.341 subnqn: nqn.2016-06.io.spdk:cnode2
00:08:12.341 traddr: 10.0.0.2
00:08:12.341 eflags: none
00:08:12.341 sectype: none
00:08:12.341 =====Discovery Log Entry 3======
00:08:12.341 trtype: tcp
00:08:12.341 adrfam: ipv4
00:08:12.341 subtype: nvme subsystem
00:08:12.341 treq: not required
00:08:12.341 portid: 0
00:08:12.341 trsvcid: 4420
00:08:12.341 subnqn: nqn.2016-06.io.spdk:cnode3
00:08:12.341 traddr: 10.0.0.2
00:08:12.341 eflags: none
00:08:12.341 sectype: none
00:08:12.341 =====Discovery Log Entry 4======
00:08:12.341 trtype: tcp
00:08:12.341 adrfam: ipv4
00:08:12.341 subtype: nvme subsystem
00:08:12.341 treq: not required
00:08:12.341 portid: 0
00:08:12.341 trsvcid: 4420
00:08:12.341 subnqn: nqn.2016-06.io.spdk:cnode4
00:08:12.341 traddr: 10.0.0.2
00:08:12.341 eflags: none
00:08:12.341 sectype: none
00:08:12.341 =====Discovery Log Entry 5======
00:08:12.341 trtype: tcp
00:08:12.341 adrfam: ipv4
00:08:12.341 subtype: discovery subsystem referral
00:08:12.341 treq: not required
00:08:12.341 portid: 0
00:08:12.341 trsvcid: 4430
00:08:12.341 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:08:12.341 traddr: 10.0.0.2
00:08:12.341 eflags: none
00:08:12.341 sectype: none
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:08:12.341 Perform nvmf subsystem discovery via RPC
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:12.341 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:08:12.341 [
00:08:12.341 {
00:08:12.341 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:08:12.341 "subtype": "Discovery",
00:08:12.341 "listen_addresses": [
00:08:12.341 {
00:08:12.341 "trtype": "TCP",
00:08:12.341 "adrfam": "IPv4",
00:08:12.341 "traddr": "10.0.0.2",
00:08:12.341 "trsvcid": "4420"
00:08:12.602 }
00:08:12.602 ],
00:08:12.602 "allow_any_host": true,
00:08:12.602 "hosts": []
00:08:12.602 },
00:08:12.602 {
00:08:12.602 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:08:12.602 "subtype": "NVMe",
00:08:12.602 "listen_addresses": [
00:08:12.602 {
00:08:12.602 "trtype": "TCP",
00:08:12.602 "adrfam": "IPv4",
00:08:12.602 "traddr": "10.0.0.2",
00:08:12.602 "trsvcid": "4420"
00:08:12.602 }
00:08:12.602 ],
00:08:12.602 "allow_any_host": true,
00:08:12.602 "hosts": [],
00:08:12.602 "serial_number": "SPDK00000000000001",
00:08:12.602 "model_number": "SPDK bdev Controller",
00:08:12.602 "max_namespaces": 32,
00:08:12.602 "min_cntlid": 1,
00:08:12.602 "max_cntlid": 65519,
00:08:12.602 "namespaces": [
00:08:12.602 {
00:08:12.602 "nsid": 1,
00:08:12.602 "bdev_name": "Null1",
00:08:12.602 "name": "Null1",
00:08:12.602 "nguid": "91DB90116F944A15B3050122501D7D43",
00:08:12.602 "uuid": "91db9011-6f94-4a15-b305-0122501d7d43"
00:08:12.602 }
00:08:12.602 ]
00:08:12.602 },
00:08:12.602 {
00:08:12.602 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:12.602 "subtype": "NVMe",
00:08:12.602 "listen_addresses": [
00:08:12.602 {
00:08:12.602 "trtype": "TCP",
00:08:12.602 "adrfam": "IPv4",
00:08:12.603 "traddr": "10.0.0.2",
00:08:12.603 "trsvcid": "4420"
00:08:12.603 }
00:08:12.603 ],
00:08:12.603 "allow_any_host": true,
00:08:12.603 "hosts": [],
00:08:12.603 "serial_number": "SPDK00000000000002",
00:08:12.603 "model_number": "SPDK bdev Controller",
00:08:12.603 "max_namespaces": 32,
00:08:12.603 "min_cntlid": 1,
00:08:12.603 "max_cntlid": 65519,
00:08:12.603 "namespaces": [
00:08:12.603 {
00:08:12.603 "nsid": 1,
00:08:12.603 "bdev_name": "Null2",
00:08:12.603 "name": "Null2",
00:08:12.603 "nguid": "A670D4F7246E4E348835CE451D93134F",
00:08:12.603 "uuid": "a670d4f7-246e-4e34-8835-ce451d93134f"
00:08:12.603 }
00:08:12.603 ]
00:08:12.603 },
00:08:12.603 {
00:08:12.603 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:08:12.603 "subtype": "NVMe",
00:08:12.603 "listen_addresses": [
00:08:12.603 {
00:08:12.603 "trtype": "TCP",
00:08:12.603 "adrfam": "IPv4",
00:08:12.603 "traddr": "10.0.0.2",
00:08:12.603 "trsvcid": "4420"
00:08:12.603 }
00:08:12.603 ],
00:08:12.603 "allow_any_host": true,
00:08:12.603 "hosts": [],
00:08:12.603 "serial_number": "SPDK00000000000003",
00:08:12.603 "model_number": "SPDK bdev Controller",
00:08:12.603 "max_namespaces": 32,
00:08:12.603 "min_cntlid": 1,
00:08:12.603 "max_cntlid": 65519,
00:08:12.603 "namespaces": [
00:08:12.603 {
00:08:12.603 "nsid": 1,
00:08:12.603 "bdev_name": "Null3",
00:08:12.603 "name": "Null3",
00:08:12.603 "nguid": "9EC769FAAEE647F58F3302633505354A",
00:08:12.603 "uuid": "9ec769fa-aee6-47f5-8f33-02633505354a"
00:08:12.603 }
00:08:12.603 ]
00:08:12.603 },
00:08:12.603 {
00:08:12.603 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:08:12.603 "subtype": "NVMe",
00:08:12.603 "listen_addresses": [
00:08:12.603 {
00:08:12.603 "trtype": "TCP",
00:08:12.603 "adrfam": "IPv4",
00:08:12.603 "traddr": "10.0.0.2",
00:08:12.603 "trsvcid": "4420"
00:08:12.603 }
00:08:12.603 ],
00:08:12.603 "allow_any_host": true,
00:08:12.603 "hosts": [],
00:08:12.603 "serial_number": "SPDK00000000000004",
00:08:12.603 "model_number": "SPDK bdev Controller",
00:08:12.603 "max_namespaces": 32,
00:08:12.603 "min_cntlid": 1,
00:08:12.603 "max_cntlid": 65519,
00:08:12.603 "namespaces": [
00:08:12.603 {
00:08:12.603 "nsid": 1,
00:08:12.603 "bdev_name": "Null4",
00:08:12.603 "name": "Null4",
00:08:12.603 "nguid": "332A3217EC6243ECA494F1F04A78BB81",
00:08:12.603 "uuid": "332a3217-ec62-43ec-a494-f1f04a78bb81"
00:08:12.603 }
00:08:12.603 ]
00:08:12.603 }
00:08:12.603 ]
00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.603 rmmod nvme_tcp 00:08:12.603 rmmod nvme_fabrics 00:08:12.603 rmmod nvme_keyring 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 214202 ']' 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 214202 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 214202 ']' 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 214202 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 214202 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 214202' 00:08:12.603 killing process with pid 214202 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 214202 00:08:12.603 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 214202 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.864 16:06:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.400 16:06:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.400 00:08:15.400 real 0m5.412s 00:08:15.400 user 0m4.435s 00:08:15.400 sys 0m1.826s 00:08:15.400 16:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.400 16:06:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.400 ************************************ 00:08:15.400 END TEST nvmf_target_discovery 00:08:15.400 ************************************ 00:08:15.400 16:06:57 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:15.400 16:06:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:15.400 16:06:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.400 16:06:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.400 ************************************ 00:08:15.400 START TEST nvmf_referrals 00:08:15.400 ************************************ 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:15.400 * Looking for test storage... 00:08:15.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.400 16:06:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.302 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.302 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.302 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.302 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.302 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.302 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.302 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.303 16:06:59 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:17.303 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:17.303 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.303 16:06:59 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.303 16:06:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:17.303 Found net devices under 0000:84:00.0: cvl_0_0 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:17.303 Found net devices under 0000:84:00.1: cvl_0_1 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.303 16:07:00 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:08:17.303 00:08:17.303 --- 10.0.0.2 ping statistics --- 00:08:17.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.303 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:17.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:08:17.303 00:08:17.303 --- 10.0.0.1 ping statistics --- 00:08:17.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.303 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=216233 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 216233 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 216233 ']' 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
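[editor's note] The nvmf_tcp_init sequence traced above carves the dual-port E810 NIC into a self-contained test link: one port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace to act as the target, while its sibling (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, and the two-way ping confirms the path. A minimal standalone sketch of the same topology, assuming the interface names reported for this host:

    # Target port goes into its own namespace; initiator port stays put.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring everything up, including loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # common.sh also opens the default NVMe/TCP port on the initiator side.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check the link in both directions, as the log does next.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1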
00:08:17.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:17.303 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.303 [2024-07-15 16:07:00.215127] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:17.303 [2024-07-15 16:07:00.215208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.303 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.561 [2024-07-15 16:07:00.286481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.561 [2024-07-15 16:07:00.380127] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.561 [2024-07-15 16:07:00.380188] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.561 [2024-07-15 16:07:00.380204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.561 [2024-07-15 16:07:00.380218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.561 [2024-07-15 16:07:00.380229] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.562 [2024-07-15 16:07:00.380325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.562 [2024-07-15 16:07:00.380384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.562 [2024-07-15 16:07:00.380477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.562 [2024-07-15 16:07:00.380479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.562 [2024-07-15 16:07:00.522509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.562 [2024-07-15 16:07:00.534759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
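[editor's note] With the namespace in place, the target binary is launched through ip netns exec, and referrals.sh@40-41 wire up the TCP transport plus a discovery listener on port 8009. A sketch of the same steps, assuming the rpc_cmd wrapper in this log is backed by SPDK's stock scripts/rpc.py as in the autotest harness:

    # Start the target inside the namespace with the flags captured above:
    # shm id 0, tracepoint group mask 0xFFFF, four reactor cores (0xF).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Once the RPC socket answers: create the TCP transport with the flags
    # from referrals.sh@40, then listen for discovery on 10.0.0.2:8009.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery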
00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.562 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.821 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:18.080 16:07:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:18.080 16:07:01 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:18.080 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.338 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:18.596 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.597 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.597 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.597 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.597 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.854 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.113 16:07:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.372 rmmod nvme_tcp 00:08:19.372 rmmod nvme_fabrics 00:08:19.372 rmmod nvme_keyring 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 216233 ']' 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 216233 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 216233 ']' 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 216233 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 216233 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 216233' 00:08:19.372 killing process with pid 216233 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 216233 00:08:19.372 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 216233 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.630 16:07:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.538 16:07:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.538 00:08:21.538 real 0m6.598s 00:08:21.538 user 0m9.380s 00:08:21.538 sys 0m2.188s 00:08:21.538 16:07:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 
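[editor's note] That closes out nvmf_referrals. Stripped of the xtrace plumbing, the test body above (referrals.sh@44-83) is a short add/inspect/remove cycle; a sketch under the same rpc.py assumption, with the host NQN this run obtained from nvme gen-hostnqn:

    # Register three referrals pointing at other discovery services.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Target-side view of the referral list (should print all three IPs).
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

    # Host-side view: the same referrals must surface in the discovery log
    # page; the current discovery subsystem itself is filtered out.
    nvme discover \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # Remove them again; nvmf_discovery_get_referrals | jq length drops to 0.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    # The @60-79 steps repeat this cycle with explicit subsystem NQNs
    # (-n discovery, -n nqn.2016-06.io.spdk:cnode1) attached to the referral.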
00:08:21.538 16:07:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.538 ************************************ 00:08:21.538 END TEST nvmf_referrals 00:08:21.538 ************************************ 00:08:21.538 16:07:04 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:21.538 16:07:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:21.538 16:07:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.538 16:07:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.538 ************************************ 00:08:21.538 START TEST nvmf_connect_disconnect 00:08:21.538 ************************************ 00:08:21.538 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:21.796 * Looking for test storage... 00:08:21.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.796 
16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.796 16:07:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
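[editor's note] gather_supported_nvmf_pci_devs is rebuilding the same device-ID tables the first test used: Intel 0x1592/0x159b land in e810, 0x37d2 in x722, and the Mellanox IDs that continue just below go into mlx. Outside the harness, lspci answers the same classification question directly; a small sketch, grouping IDs as common.sh does:

    # Which supported NVMe-over-Fabrics NICs does this box have?
    lspci -d 8086:159b        # E810 (this host: 0000:84:00.0 and .1)
    lspci -d 8086:1592        # the other e810 table entry
    lspci -d 8086:37d2        # x722
    lspci -d 15b3:            # any Mellanox part, cf. the mlx entries below

    # Driver bound to a function, matching the log's 'ice == unknown' test:
    basename "$(readlink /sys/bus/pci/devices/0000:84:00.0/driver)"   # -> ice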
00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:23.702 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:23.702 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.702 
16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:23.702 Found net devices under 0000:84:00.0: cvl_0_0 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.702 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:23.703 Found net devices under 0000:84:00.1: cvl_0_1 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.703 16:07:06 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:08:23.703 00:08:23.703 --- 10.0.0.2 ping statistics --- 00:08:23.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.703 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:08:23.703 00:08:23.703 --- 10.0.0.1 ping statistics --- 00:08:23.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.703 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=218541 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 218541 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 218541 ']' 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:23.703 16:07:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.963 [2024-07-15 16:07:06.717686] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
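[editor's note] Between nvmfappstart and the EAL banner, waitforlisten 218541 blocks until the freshly launched nvmf_tgt answers on its RPC socket. A rough hand-rolled equivalent, assuming the default /var/tmp/spdk.sock path named in the log and mirroring its max_retries=100:

    # Poll until the target's RPC socket responds, roughly 10 s worst case.
    wait_for_rpc() {
        local pid=$1 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
                >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc "$nvmfpid"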
00:08:23.963 [2024-07-15 16:07:06.717788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.963 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.963 [2024-07-15 16:07:06.795179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.963 [2024-07-15 16:07:06.893711] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.963 [2024-07-15 16:07:06.893780] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.963 [2024-07-15 16:07:06.893798] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.963 [2024-07-15 16:07:06.893811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.963 [2024-07-15 16:07:06.893823] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.963 [2024-07-15 16:07:06.893879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.963 [2024-07-15 16:07:06.893912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.963 [2024-07-15 16:07:06.894031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.963 [2024-07-15 16:07:06.894034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:24.223 [2024-07-15 16:07:07.050522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:24.223 16:07:07 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:24.223 [2024-07-15 16:07:07.106046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:24.223 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:24.224 16:07:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:26.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.495 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[the same 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' line repeats once per connect/disconnect iteration for the remainder of the 100-iteration loop, timestamps 00:09:21.405 through 00:12:13.441]
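For reference, the target-side RPCs issued above and the loop that produced the repeated disconnect messages reduce to roughly the following (a sketch, not connect_disconnect.sh verbatim; rpc_cmd is the harness wrapper around scripts/rpc.py, and every value is the one shown in this log):

    # target side, as issued above
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512        # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: num_iterations=100 and NVME_CONNECT='nvme connect -i 8' per the log above
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints 'disconnected 1 controller(s)'
    done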
00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:13.441 rmmod nvme_tcp 00:12:13.441 rmmod nvme_fabrics 00:12:13.441 rmmod nvme_keyring 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 218541 ']' 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 218541 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 218541
']' 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 218541 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 218541 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 218541' 00:12:13.441 killing process with pid 218541 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 218541 00:12:13.441 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 218541 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.699 16:10:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.236 16:10:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:16.236 00:12:16.236 real 3m54.147s 00:12:16.236 user 14m52.254s 00:12:16.236 sys 0m34.483s 00:12:16.236 16:10:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:16.236 16:10:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.236 ************************************ 00:12:16.236 END TEST nvmf_connect_disconnect 00:12:16.236 ************************************ 00:12:16.236 16:10:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:16.236 16:10:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:16.236 16:10:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:16.236 16:10:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:16.236 ************************************ 00:12:16.236 START TEST nvmf_multitarget 00:12:16.236 ************************************ 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:16.236 * Looking for test storage... 
00:12:16.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.236 16:10:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:18.138 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:18.138 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:18.138 Found net devices under 0000:84:00.0: cvl_0_0 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
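The per-device discovery loop continues below until both e810 ports are matched to their net devices (cvl_0_0 and cvl_0_1); nvmf_tcp_init then builds the namespace topology that the pings further down verify. In outline, using only the device names and addresses from this run:

    # outline of nvmf_tcp_init as logged below
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check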
00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:18.138 Found net devices under 0000:84:00.1: cvl_0_1 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:18.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:12:18.138 00:12:18.138 --- 10.0.0.2 ping statistics --- 00:12:18.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.138 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:12:18.138 00:12:18.138 --- 10.0.0.1 ping statistics --- 00:12:18.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.138 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.138 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=249394 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 249394 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 249394 ']' 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:18.139 16:11:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.139 [2024-07-15 16:11:00.980056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
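The multitarget test that follows drives target creation and deletion through test/nvmf/target/multitarget_rpc.py, which exposes the nvmf_create_target/nvmf_delete_target/nvmf_get_targets methods seen below. The logged checks boil down to roughly this sketch:

    rpc_py=test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints 'true' on success
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target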
00:12:18.139 [2024-07-15 16:11:00.980139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.139 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.139 [2024-07-15 16:11:01.047537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.398 [2024-07-15 16:11:01.140872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.398 [2024-07-15 16:11:01.140927] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.398 [2024-07-15 16:11:01.140943] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.398 [2024-07-15 16:11:01.140956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.398 [2024-07-15 16:11:01.140967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.398 [2024-07-15 16:11:01.141036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.398 [2024-07-15 16:11:01.141072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.398 [2024-07-15 16:11:01.141187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.398 [2024-07-15 16:11:01.141190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:18.398 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:18.656 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:18.656 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:18.656 "nvmf_tgt_1" 00:12:18.656 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:18.656 "nvmf_tgt_2" 00:12:18.915 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:18.915 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:18.915 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:18.915 
16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:18.915 true 00:12:18.915 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:19.174 true 00:12:19.174 16:11:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.174 rmmod nvme_tcp 00:12:19.174 rmmod nvme_fabrics 00:12:19.174 rmmod nvme_keyring 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 249394 ']' 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 249394 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 249394 ']' 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 249394 00:12:19.174 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 249394 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 249394' 00:12:19.433 killing process with pid 249394 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 249394 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 249394 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.433 16:11:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.979 16:11:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.979 00:12:21.979 real 0m5.732s 00:12:21.979 user 0m6.457s 00:12:21.979 sys 0m1.925s 00:12:21.979 16:11:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.979 16:11:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:21.979 ************************************ 00:12:21.979 END TEST nvmf_multitarget 00:12:21.979 ************************************ 00:12:21.979 16:11:04 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:21.979 16:11:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:21.979 16:11:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.979 16:11:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.979 ************************************ 00:12:21.979 START TEST nvmf_rpc 00:12:21.979 ************************************ 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:21.979 * Looking for test storage... 00:12:21.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.979 16:11:04 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.979 16:11:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.980 
16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.980 16:11:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:23.887 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:23.887 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:23.887 Found net devices under 0000:84:00.0: cvl_0_0 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.887 
16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:23.887 Found net devices under 0000:84:00.1: cvl_0_1 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:12:23.887 00:12:23.887 --- 10.0.0.2 ping statistics --- 00:12:23.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.887 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:12:23.887 00:12:23.887 --- 10.0.0.1 ping statistics --- 00:12:23.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.887 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.887 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=251502 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 251502 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 251502 ']' 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:23.888 16:11:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.888 [2024-07-15 16:11:06.699123] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
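[editorial sketch] The nvmf_tcp_init sequence traced above splits the two ice ports between the root namespace (initiator side, cvl_0_1) and a private network namespace (target side, cvl_0_0), so initiator and target traffic crosses the physical link even on a single host. A condensed replay of those commands, using the interface names and 10.0.0.0/24 addresses this particular run discovered (both are host-specific), with the workspace path to nvmf_tgt shortened:

# run as root; cvl_0_0/cvl_0_1 are the ice ports found under 0000:84:00.0/.1
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                   # both directions are pinged before the target starts
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &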
00:12:23.888 [2024-07-15 16:11:06.699220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.888 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.888 [2024-07-15 16:11:06.783760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.146 [2024-07-15 16:11:06.887246] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.146 [2024-07-15 16:11:06.887302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.146 [2024-07-15 16:11:06.887340] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.146 [2024-07-15 16:11:06.887361] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.146 [2024-07-15 16:11:06.887379] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.146 [2024-07-15 16:11:06.887468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.146 [2024-07-15 16:11:06.887502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.146 [2024-07-15 16:11:06.887563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.146 [2024-07-15 16:11:06.887571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:24.146 "tick_rate": 2700000000, 00:12:24.146 "poll_groups": [ 00:12:24.146 { 00:12:24.146 "name": "nvmf_tgt_poll_group_000", 00:12:24.146 "admin_qpairs": 0, 00:12:24.146 "io_qpairs": 0, 00:12:24.146 "current_admin_qpairs": 0, 00:12:24.146 "current_io_qpairs": 0, 00:12:24.146 "pending_bdev_io": 0, 00:12:24.146 "completed_nvme_io": 0, 00:12:24.146 "transports": [] 00:12:24.146 }, 00:12:24.146 { 00:12:24.146 "name": "nvmf_tgt_poll_group_001", 00:12:24.146 "admin_qpairs": 0, 00:12:24.146 "io_qpairs": 0, 00:12:24.146 "current_admin_qpairs": 0, 00:12:24.146 "current_io_qpairs": 0, 00:12:24.146 "pending_bdev_io": 0, 00:12:24.146 "completed_nvme_io": 0, 00:12:24.146 "transports": [] 00:12:24.146 }, 00:12:24.146 { 00:12:24.146 "name": "nvmf_tgt_poll_group_002", 00:12:24.146 "admin_qpairs": 0, 00:12:24.146 "io_qpairs": 0, 00:12:24.146 "current_admin_qpairs": 0, 00:12:24.146 "current_io_qpairs": 0, 00:12:24.146 "pending_bdev_io": 0, 00:12:24.146 "completed_nvme_io": 0, 00:12:24.146 "transports": [] 
00:12:24.146 }, 00:12:24.146 { 00:12:24.146 "name": "nvmf_tgt_poll_group_003", 00:12:24.146 "admin_qpairs": 0, 00:12:24.146 "io_qpairs": 0, 00:12:24.146 "current_admin_qpairs": 0, 00:12:24.146 "current_io_qpairs": 0, 00:12:24.146 "pending_bdev_io": 0, 00:12:24.146 "completed_nvme_io": 0, 00:12:24.146 "transports": [] 00:12:24.146 } 00:12:24.146 ] 00:12:24.146 }' 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:24.146 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.404 [2024-07-15 16:11:07.150888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:24.404 "tick_rate": 2700000000, 00:12:24.404 "poll_groups": [ 00:12:24.404 { 00:12:24.404 "name": "nvmf_tgt_poll_group_000", 00:12:24.404 "admin_qpairs": 0, 00:12:24.404 "io_qpairs": 0, 00:12:24.404 "current_admin_qpairs": 0, 00:12:24.404 "current_io_qpairs": 0, 00:12:24.404 "pending_bdev_io": 0, 00:12:24.404 "completed_nvme_io": 0, 00:12:24.404 "transports": [ 00:12:24.404 { 00:12:24.404 "trtype": "TCP" 00:12:24.404 } 00:12:24.404 ] 00:12:24.404 }, 00:12:24.404 { 00:12:24.404 "name": "nvmf_tgt_poll_group_001", 00:12:24.404 "admin_qpairs": 0, 00:12:24.404 "io_qpairs": 0, 00:12:24.404 "current_admin_qpairs": 0, 00:12:24.404 "current_io_qpairs": 0, 00:12:24.404 "pending_bdev_io": 0, 00:12:24.404 "completed_nvme_io": 0, 00:12:24.404 "transports": [ 00:12:24.404 { 00:12:24.404 "trtype": "TCP" 00:12:24.404 } 00:12:24.404 ] 00:12:24.404 }, 00:12:24.404 { 00:12:24.404 "name": "nvmf_tgt_poll_group_002", 00:12:24.404 "admin_qpairs": 0, 00:12:24.404 "io_qpairs": 0, 00:12:24.404 "current_admin_qpairs": 0, 00:12:24.404 "current_io_qpairs": 0, 00:12:24.404 "pending_bdev_io": 0, 00:12:24.404 "completed_nvme_io": 0, 00:12:24.404 "transports": [ 00:12:24.404 { 00:12:24.404 "trtype": "TCP" 00:12:24.404 } 00:12:24.404 ] 00:12:24.404 }, 00:12:24.404 { 00:12:24.404 "name": "nvmf_tgt_poll_group_003", 00:12:24.404 "admin_qpairs": 0, 00:12:24.404 "io_qpairs": 0, 00:12:24.404 "current_admin_qpairs": 0, 00:12:24.404 "current_io_qpairs": 0, 00:12:24.404 "pending_bdev_io": 0, 00:12:24.404 "completed_nvme_io": 0, 00:12:24.404 "transports": [ 00:12:24.404 { 00:12:24.404 "trtype": "TCP" 00:12:24.404 } 00:12:24.404 ] 00:12:24.404 } 00:12:24.404 ] 
00:12:24.404 }' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.404 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.404 Malloc1 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 [2024-07-15 16:11:07.312466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:24.405 [2024-07-15 16:11:07.335094] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:12:24.405 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:24.405 could not add new controller: failed to write to nvme-fabrics device 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.334 16:11:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.334 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:25.334 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.334 16:11:07 
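[editorial sketch] The failed-then-successful connect pair logged above is the host-ACL check from target/rpc.sh@58-63: with allow_any_host disabled, nvmf_qpair_access_allowed rejects a host NQN that is not on the subsystem's list, and the same connect succeeds once nvmf_subsystem_add_host registers it. A minimal sketch of that round-trip, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (socket and namespace plumbing omitted; HOSTNQN is this host's UUID-based NQN from the trace):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # enforce the host list
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# unlisted host: the target refuses the connection ("Input/output error" above)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=$HOSTNQN || true
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
# listed host: the same command now succeeds
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=$HOSTNQN

The trace then walks the inverse path: nvmf_subsystem_remove_host makes the connect fail again, and nvmf_subsystem_allow_any_host -e re-admits it.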
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:25.334 16:11:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:27.232 16:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:27.232 16:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:27.232 16:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.232 16:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:27.232 16:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.232 16:11:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:27.232 16:11:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.232 [2024-07-15 16:11:10.107822] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:12:27.232 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:27.232 could not add new controller: failed to write to nvme-fabrics device 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.232 16:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.799 16:11:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.799 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:27.799 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.799 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:27.799 16:11:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:30.375 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:30.375 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:30.375 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.375 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:30.375 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.376 [2024-07-15 16:11:12.890324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.376 16:11:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.637 16:11:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.637 16:11:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:30.637 16:11:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.637 16:11:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:30.637 16:11:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:32.540 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:32.540 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:32.540 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.540 16:11:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:32.540 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.540 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:32.540 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 [2024-07-15 16:11:15.650436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:32.800 
16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.800 16:11:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.368 16:11:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.368 16:11:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:33.368 16:11:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.368 16:11:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:33.368 16:11:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:35.905 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.906 16:11:18 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.906 [2024-07-15 16:11:18.411192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.906 16:11:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.165 16:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.165 16:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:36.165 16:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.165 16:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:36.165 16:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.699 [2024-07-15 16:11:21.215370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.699 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.956 16:11:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.956 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.956 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
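[editorial sketch] Each connect above is gated by waitforserial, and each disconnect by waitforserial_disconnect; their polling is what produces the interleaved lsblk/grep lines (common/autotest_common.sh@1194-1227). A reconstruction of the connect-side helper from the commands visible in the trace; the 2-second sleep and 15-try bound come straight from the "sleep 2" and "(( i++ <= 15 ))" lines, while the expected device count of 1 is an assumption matching this run:

waitforserial() {
    local serial=$1 i=0 nvme_devices
    while (( i++ <= 15 )); do
        sleep 2
        # the namespace appears in lsblk tagged with the subsystem's serial number
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == 1 )) && return 0
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME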
00:12:38.956 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:38.956 16:11:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:41.491 16:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:41.491 16:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:41.491 16:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.491 16:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:41.491 16:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.492 16:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:41.492 16:11:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.492 [2024-07-15 16:11:24.066835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.492 16:11:24 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.492 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.060 16:11:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.060 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:42.060 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.060 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:42.060 16:11:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
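[editorial sketch] With the fifth delete issued, the loop begun at target/rpc.sh@81 is complete; every pass has the same shape: stand up a subsystem carrying Malloc1 at an explicit NSID of 5, connect, wait for the serial, then tear everything down. The loop written out as a plain script, with scripts/rpc.py again standing in for rpc_cmd and waitforserial as sketched earlier:

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # explicit NSID 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=$HOSTNQN
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The block that follows (target/rpc.sh@99-107) repeats the create/add/remove/delete churn five more times without ever connecting, exercising subsystem teardown while no qpairs exist.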
00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 [2024-07-15 16:11:26.882303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.961 [2024-07-15 16:11:26.930381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.961 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 [2024-07-15 16:11:26.978534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]]
00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 [2024-07-15 16:11:27.026695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 [2024-07-15 16:11:27.074895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:12:44.219 "tick_rate": 2700000000,
00:12:44.219 "poll_groups": [
00:12:44.219 {
00:12:44.219 "name": "nvmf_tgt_poll_group_000",
00:12:44.219 "admin_qpairs": 2,
00:12:44.219 "io_qpairs": 84,
00:12:44.219 "current_admin_qpairs": 0,
00:12:44.219 "current_io_qpairs": 0,
00:12:44.219 "pending_bdev_io": 0,
00:12:44.219 "completed_nvme_io": 198,
00:12:44.219 "transports": [
00:12:44.219 {
00:12:44.219 "trtype": "TCP"
00:12:44.219 }
00:12:44.219 ]
00:12:44.219 },
00:12:44.219 {
00:12:44.219 "name": "nvmf_tgt_poll_group_001",
00:12:44.219 "admin_qpairs": 2,
00:12:44.219 "io_qpairs": 84,
00:12:44.219 "current_admin_qpairs": 0,
00:12:44.219 "current_io_qpairs": 0,
00:12:44.219 "pending_bdev_io": 0,
00:12:44.219 "completed_nvme_io": 168,
00:12:44.219 "transports": [
00:12:44.219 {
00:12:44.219 "trtype": "TCP"
00:12:44.219 }
00:12:44.219 ]
00:12:44.219 },
00:12:44.219 {
00:12:44.219 "name": "nvmf_tgt_poll_group_002",
00:12:44.219 "admin_qpairs": 1,
00:12:44.219 "io_qpairs": 84,
00:12:44.219 "current_admin_qpairs": 0,
00:12:44.219 "current_io_qpairs": 0,
00:12:44.219 "pending_bdev_io": 0,
00:12:44.219 "completed_nvme_io": 136,
00:12:44.219 "transports": [
00:12:44.219 {
00:12:44.219 "trtype": "TCP"
00:12:44.219 }
00:12:44.219 ]
00:12:44.219 },
00:12:44.219 {
00:12:44.219 "name": "nvmf_tgt_poll_group_003",
00:12:44.219 "admin_qpairs": 2,
00:12:44.219 "io_qpairs": 84,
00:12:44.219 "current_admin_qpairs": 0,
00:12:44.219 "current_io_qpairs": 0,
00:12:44.219 "pending_bdev_io": 0,
00:12:44.219 "completed_nvme_io": 184,
00:12:44.219 "transports": [
00:12:44.219 {
00:12:44.219 "trtype": "TCP"
00:12:44.219 }
00:12:44.219 ]
00:12:44.219 }
00:12:44.219 ]
00:12:44.219 }'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:12:44.219 16:11:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:12:44.220 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:44.220 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:12:44.220 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:44.220 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:12:44.220 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:44.220 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:44.478 rmmod nvme_tcp
00:12:44.478 rmmod nvme_fabrics
00:12:44.478 rmmod nvme_keyring
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 251502 ']'
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 251502
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 251502 ']'
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 251502
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 251502
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 251502'
00:12:44.478 killing process with pid 251502
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 251502
00:12:44.478 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 251502
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:44.736 16:11:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:46.640 16:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:46.640
00:12:46.640 real 0m25.079s
00:12:46.640 user 1m21.916s
00:12:46.640 sys 0m4.013s
00:12:46.640 16:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:12:46.640 16:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:46.640 ************************************
00:12:46.640 END TEST nvmf_rpc
00:12:46.640 ************************************
00:12:46.640 16:11:29 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:12:46.640 16:11:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:12:46.640 16:11:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:12:46.640 16:11:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:46.898 ************************************
00:12:46.898 START TEST nvmf_invalid
00:12:46.898 ************************************
00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:12:46.898 * Looking for test storage...
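For reference, the nvmf_rpc loop traced above reduces to the following minimal bash sketch. It is not the test script itself: target/rpc.sh drives these RPCs through its rpc_cmd wrapper and derives its own loop count, while this sketch calls scripts/rpc.py directly and assumes a running nvmf_tgt that already has a TCP transport and a bdev named Malloc1 (loops=5 is illustrative). The jsum helper at the end mirrors the jq piped into awk seen in the trace, except that it re-queries the target instead of reusing a captured stats variable.

# Minimal sketch of the create/teardown loop exercised by target/rpc.sh above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
loops=5   # illustrative; the test computes its own loop count
for i in $(seq 1 $loops); do
  # build the subsystem up...
  $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns $nqn Malloc1
  $rpc nvmf_subsystem_allow_any_host $nqn
  # ...and tear it straight back down (namespace id 1 was just added)
  $rpc nvmf_subsystem_remove_ns $nqn 1
  $rpc nvmf_delete_subsystem $nqn
done
# jsum-style check: sum one numeric field across all poll groups
jsum() {
  $rpc nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'
}
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))  # 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))     # 336 in the run above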
00:12:46.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.898 16:11:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:48.802 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:48.802 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.802 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:48.803 Found net devices under 0000:84:00.0: cvl_0_0 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:48.803 Found net devices under 0000:84:00.1: cvl_0_1 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.803 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:49.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:12:49.062
00:12:49.062 --- 10.0.0.2 ping statistics ---
00:12:49.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:49.062 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:49.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:49.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
00:12:49.062
00:12:49.062 --- 10.0.0.1 ping statistics ---
00:12:49.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:49.062 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=256004
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 256004
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 256004 ']'
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:49.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable
00:12:49.062 16:11:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:49.062 [2024-07-15 16:11:31.874286] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
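The nvmfappstart sequence traced above boils down to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then polling its RPC socket until it answers. Below is a simplified stand-in under those assumptions (retry budget and sleep interval are illustrative; the real waitforlisten helper in autotest_common.sh is more thorough). rpc_get_methods is used as the readiness probe because it is a standard SPDK RPC that succeeds as soon as the socket is live.

# Simplified stand-in for nvmfappstart + waitforlisten.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
pid=$!

for ((retry = 0; retry < 100; retry++)); do
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
  # ready once the target answers on its UNIX-domain RPC socket
  if "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
    echo "nvmf_tgt (pid $pid) is listening on $sock"
    break
  fi
  sleep 0.5
done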
00:12:49.062 [2024-07-15 16:11:31.874367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:49.062 EAL: No free 2048 kB hugepages reported on node 1
00:12:49.062 [2024-07-15 16:11:31.944755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:49.062 [2024-07-15 16:11:32.038550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:49.062 [2024-07-15 16:11:32.038600] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:49.062 [2024-07-15 16:11:32.038616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:49.062 [2024-07-15 16:11:32.038629] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:49.062 [2024-07-15 16:11:32.038641] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:49.062 [2024-07-15 16:11:32.038756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:49.062 [2024-07-15 16:11:32.038832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:12:49.062 [2024-07-15 16:11:32.038811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:12:49.062 [2024-07-15 16:11:32.038835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:12:49.321 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9872
00:12:49.579 [2024-07-15 16:11:32.419044] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:12:49.579 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:12:49.579 {
00:12:49.579 "nqn": "nqn.2016-06.io.spdk:cnode9872",
00:12:49.579 "tgt_name": "foobar",
00:12:49.579 "method": "nvmf_create_subsystem",
00:12:49.579 "req_id": 1
00:12:49.579 }
00:12:49.579 Got JSON-RPC error response
00:12:49.579 response:
00:12:49.579 {
00:12:49.579 "code": -32603,
00:12:49.579 "message": "Unable to find target foobar"
00:12:49.579 }'
00:12:49.579 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:12:49.579 {
00:12:49.579 "nqn": "nqn.2016-06.io.spdk:cnode9872",
00:12:49.579 "tgt_name": "foobar",
00:12:49.579 "method": "nvmf_create_subsystem",
00:12:49.579 "req_id": 1
00:12:49.579 }
00:12:49.579 Got JSON-RPC error response
00:12:49.579 response:
00:12:49.579 {
00:12:49.579 "code": -32603,
00:12:49.579 "message": "Unable to find target foobar"
00:12:49.579 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:49.579 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:49.579 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2292 00:12:49.838 [2024-07-15 16:11:32.671884] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2292: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:49.838 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:49.838 { 00:12:49.838 "nqn": "nqn.2016-06.io.spdk:cnode2292", 00:12:49.838 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:49.838 "method": "nvmf_create_subsystem", 00:12:49.838 "req_id": 1 00:12:49.838 } 00:12:49.838 Got JSON-RPC error response 00:12:49.838 response: 00:12:49.838 { 00:12:49.838 "code": -32602, 00:12:49.838 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:49.838 }' 00:12:49.838 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:49.838 { 00:12:49.838 "nqn": "nqn.2016-06.io.spdk:cnode2292", 00:12:49.838 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:49.838 "method": "nvmf_create_subsystem", 00:12:49.838 "req_id": 1 00:12:49.838 } 00:12:49.838 Got JSON-RPC error response 00:12:49.838 response: 00:12:49.838 { 00:12:49.838 "code": -32602, 00:12:49.838 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:49.838 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:49.838 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:49.838 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28556 00:12:50.096 [2024-07-15 16:11:32.912635] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28556: invalid model number 'SPDK_Controller' 00:12:50.096 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:50.096 { 00:12:50.096 "nqn": "nqn.2016-06.io.spdk:cnode28556", 00:12:50.096 "model_number": "SPDK_Controller\u001f", 00:12:50.096 "method": "nvmf_create_subsystem", 00:12:50.096 "req_id": 1 00:12:50.096 } 00:12:50.096 Got JSON-RPC error response 00:12:50.096 response: 00:12:50.096 { 00:12:50.096 "code": -32602, 00:12:50.096 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.096 }' 00:12:50.096 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:50.096 { 00:12:50.096 "nqn": "nqn.2016-06.io.spdk:cnode28556", 00:12:50.096 "model_number": "SPDK_Controller\u001f", 00:12:50.096 "method": "nvmf_create_subsystem", 00:12:50.096 "req_id": 1 00:12:50.096 } 00:12:50.096 Got JSON-RPC error response 00:12:50.096 response: 00:12:50.096 { 00:12:50.096 "code": -32602, 00:12:50.096 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.096 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:50.096 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:50.096 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 47 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'zqz1x6"JOF*9=Cj/Q<_UE' 00:12:50.097 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'zqz1x6"JOF*9=Cj/Q<_UE' nqn.2016-06.io.spdk:cnode2016 00:12:50.356 [2024-07-15 16:11:33.249791] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2016: invalid serial number 'zqz1x6"JOF*9=Cj/Q<_UE' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:50.356 { 00:12:50.356 "nqn": "nqn.2016-06.io.spdk:cnode2016", 00:12:50.356 "serial_number": "zqz1x6\"JOF*9=Cj/Q<_UE", 00:12:50.356 "method": "nvmf_create_subsystem", 00:12:50.356 "req_id": 1 00:12:50.356 } 00:12:50.356 Got JSON-RPC error response 00:12:50.356 response: 00:12:50.356 { 00:12:50.356 "code": -32602, 00:12:50.356 
"message": "Invalid SN zqz1x6\"JOF*9=Cj/Q<_UE" 00:12:50.356 }' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:50.356 { 00:12:50.356 "nqn": "nqn.2016-06.io.spdk:cnode2016", 00:12:50.356 "serial_number": "zqz1x6\"JOF*9=Cj/Q<_UE", 00:12:50.356 "method": "nvmf_create_subsystem", 00:12:50.356 "req_id": 1 00:12:50.356 } 00:12:50.356 Got JSON-RPC error response 00:12:50.356 response: 00:12:50.356 { 00:12:50.356 "code": -32602, 00:12:50.356 "message": "Invalid SN zqz1x6\"JOF*9=Cj/Q<_UE" 00:12:50.356 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:50.356 16:11:33 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.356 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:50.357 16:11:33 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:50.357 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:50.615 16:11:33 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.615 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '$9sZ&HE#2rSd_Au^w1{+.-PrVT=;y%Jns9["RC_' 00:12:50.616 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '$9sZ&HE#2rSd_Au^w1{+.-PrVT=;y%Jns9["RC_' nqn.2016-06.io.spdk:cnode31781 00:12:50.874 [2024-07-15 16:11:33.626968] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31781: invalid model number '$9sZ&HE#2rSd_Au^w1{+.-PrVT=;y%Jns9["RC_' 00:12:50.874 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:50.874 { 00:12:50.874 "nqn": "nqn.2016-06.io.spdk:cnode31781", 00:12:50.874 "model_number": "\u007f$9sZ&HE#2rSd\u007f_Au^w1{+.-PrVT=;y%Jns9[\"RC_", 00:12:50.874 "method": "nvmf_create_subsystem", 00:12:50.874 "req_id": 1 00:12:50.874 } 00:12:50.874 Got JSON-RPC error response 00:12:50.874 response: 00:12:50.874 { 00:12:50.874 "code": -32602, 00:12:50.874 "message": "Invalid MN \u007f$9sZ&HE#2rSd\u007f_Au^w1{+.-PrVT=;y%Jns9[\"RC_" 00:12:50.874 }' 00:12:50.874 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:50.874 { 00:12:50.874 "nqn": "nqn.2016-06.io.spdk:cnode31781", 00:12:50.874 "model_number": "\u007f$9sZ&HE#2rSd\u007f_Au^w1{+.-PrVT=;y%Jns9[\"RC_", 00:12:50.874 "method": "nvmf_create_subsystem", 00:12:50.874 "req_id": 1 00:12:50.874 } 00:12:50.874 Got JSON-RPC error response 00:12:50.874 response: 00:12:50.874 { 00:12:50.874 "code": -32602, 00:12:50.874 "message": "Invalid MN \u007f$9sZ&HE#2rSd\u007f_Au^w1{+.-PrVT=;y%Jns9[\"RC_" 00:12:50.874 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:50.874 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
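The character loop traced above is how invalid.sh manufactures the junk model number: each pass renders one code point as hex with printf %x and appends the literal byte with echo -e. A condensed sketch of that loop, keeping the ll/length/string names from the trace; the length and the code-point window are assumptions, since both are chosen before this excerpt:

  string=''
  length=41                            # assumed; picked earlier in the script
  for (( ll = 0; ll < length; ll++ )); do
      code=$(( RANDOM % 96 + 32 ))     # assumed window (0x20-0x7f); the trace shows values up to 127
      hex=$(printf %x "$code")         # e.g. 5a
      string+=$(echo -e "\x$hex")      # append the literal character, e.g. Z
  done
  echo "$string"                       # the junk model number handed to rpc.py

The string is then fed to nvmf_create_subsystem as a model number (-d), and the test only passes because the target answers with the "Invalid MN" JSON-RPC error matched above.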
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:51.132 [2024-07-15 16:11:33.875852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.132 16:11:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:51.390 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:51.390 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:51.390 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:51.390 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:51.390 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:51.648 [2024-07-15 16:11:34.373484] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:51.648 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:51.648 { 00:12:51.648 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:51.648 "listen_address": { 00:12:51.648 "trtype": "tcp", 00:12:51.648 "traddr": "", 00:12:51.648 "trsvcid": "4421" 00:12:51.648 }, 00:12:51.648 "method": "nvmf_subsystem_remove_listener", 00:12:51.648 "req_id": 1 00:12:51.648 } 00:12:51.648 Got JSON-RPC error response 00:12:51.648 response: 00:12:51.648 { 00:12:51.648 "code": -32602, 00:12:51.648 "message": "Invalid parameters" 00:12:51.648 }' 00:12:51.648 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:51.648 { 00:12:51.648 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:51.648 "listen_address": { 00:12:51.648 "trtype": "tcp", 00:12:51.648 "traddr": "", 00:12:51.648 "trsvcid": "4421" 00:12:51.648 }, 00:12:51.648 "method": "nvmf_subsystem_remove_listener", 00:12:51.648 "req_id": 1 00:12:51.648 } 00:12:51.648 Got JSON-RPC error response 00:12:51.648 response: 00:12:51.648 { 00:12:51.648 "code": -32602, 00:12:51.648 "message": "Invalid parameters" 00:12:51.648 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:51.648 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26496 -i 0 00:12:51.907 [2024-07-15 16:11:34.630291] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26496: invalid cntlid range [0-65519] 00:12:51.907 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:51.907 { 00:12:51.907 "nqn": "nqn.2016-06.io.spdk:cnode26496", 00:12:51.907 "min_cntlid": 0, 00:12:51.907 "method": "nvmf_create_subsystem", 00:12:51.907 "req_id": 1 00:12:51.907 } 00:12:51.907 Got JSON-RPC error response 00:12:51.907 response: 00:12:51.907 { 00:12:51.907 "code": -32602, 00:12:51.907 "message": "Invalid cntlid range [0-65519]" 00:12:51.907 }' 00:12:51.907 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:51.907 { 00:12:51.907 "nqn": "nqn.2016-06.io.spdk:cnode26496", 00:12:51.907 "min_cntlid": 0, 00:12:51.907 "method": "nvmf_create_subsystem", 00:12:51.907 "req_id": 1 00:12:51.907 } 00:12:51.907 Got JSON-RPC error response 00:12:51.907 response: 00:12:51.907 { 00:12:51.907 "code": -32602, 00:12:51.907 "message": "Invalid cntlid range [0-65519]" 00:12:51.907 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:12:51.907 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7392 -i 65520 00:12:51.907 [2024-07-15 16:11:34.871115] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7392: invalid cntlid range [65520-65519] 00:12:52.166 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:52.166 { 00:12:52.166 "nqn": "nqn.2016-06.io.spdk:cnode7392", 00:12:52.166 "min_cntlid": 65520, 00:12:52.166 "method": "nvmf_create_subsystem", 00:12:52.166 "req_id": 1 00:12:52.166 } 00:12:52.166 Got JSON-RPC error response 00:12:52.166 response: 00:12:52.166 { 00:12:52.166 "code": -32602, 00:12:52.166 "message": "Invalid cntlid range [65520-65519]" 00:12:52.166 }' 00:12:52.166 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:52.166 { 00:12:52.166 "nqn": "nqn.2016-06.io.spdk:cnode7392", 00:12:52.166 "min_cntlid": 65520, 00:12:52.166 "method": "nvmf_create_subsystem", 00:12:52.166 "req_id": 1 00:12:52.166 } 00:12:52.166 Got JSON-RPC error response 00:12:52.166 response: 00:12:52.166 { 00:12:52.166 "code": -32602, 00:12:52.166 "message": "Invalid cntlid range [65520-65519]" 00:12:52.166 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.166 16:11:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10849 -I 0 00:12:52.166 [2024-07-15 16:11:35.115893] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10849: invalid cntlid range [1-0] 00:12:52.166 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:52.166 { 00:12:52.166 "nqn": "nqn.2016-06.io.spdk:cnode10849", 00:12:52.166 "max_cntlid": 0, 00:12:52.166 "method": "nvmf_create_subsystem", 00:12:52.166 "req_id": 1 00:12:52.166 } 00:12:52.166 Got JSON-RPC error response 00:12:52.166 response: 00:12:52.166 { 00:12:52.166 "code": -32602, 00:12:52.166 "message": "Invalid cntlid range [1-0]" 00:12:52.166 }' 00:12:52.166 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:52.166 { 00:12:52.166 "nqn": "nqn.2016-06.io.spdk:cnode10849", 00:12:52.166 "max_cntlid": 0, 00:12:52.167 "method": "nvmf_create_subsystem", 00:12:52.167 "req_id": 1 00:12:52.167 } 00:12:52.167 Got JSON-RPC error response 00:12:52.167 response: 00:12:52.167 { 00:12:52.167 "code": -32602, 00:12:52.167 "message": "Invalid cntlid range [1-0]" 00:12:52.167 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.167 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17554 -I 65520 00:12:52.426 [2024-07-15 16:11:35.368760] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17554: invalid cntlid range [1-65520] 00:12:52.426 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:52.426 { 00:12:52.426 "nqn": "nqn.2016-06.io.spdk:cnode17554", 00:12:52.426 "max_cntlid": 65520, 00:12:52.426 "method": "nvmf_create_subsystem", 00:12:52.426 "req_id": 1 00:12:52.426 } 00:12:52.426 Got JSON-RPC error response 00:12:52.426 response: 00:12:52.426 { 00:12:52.426 "code": -32602, 00:12:52.426 "message": "Invalid cntlid range [1-65520]" 00:12:52.426 }' 00:12:52.426 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:12:52.426 { 00:12:52.426 "nqn": "nqn.2016-06.io.spdk:cnode17554", 00:12:52.426 "max_cntlid": 65520, 00:12:52.426 "method": "nvmf_create_subsystem", 00:12:52.426 "req_id": 1 00:12:52.426 } 00:12:52.426 Got JSON-RPC error response 00:12:52.426 response: 00:12:52.426 { 00:12:52.426 "code": -32602, 00:12:52.426 "message": "Invalid cntlid range [1-65520]" 00:12:52.426 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.426 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3665 -i 6 -I 5 00:12:52.684 [2024-07-15 16:11:35.617587] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3665: invalid cntlid range [6-5] 00:12:52.684 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:52.684 { 00:12:52.684 "nqn": "nqn.2016-06.io.spdk:cnode3665", 00:12:52.684 "min_cntlid": 6, 00:12:52.684 "max_cntlid": 5, 00:12:52.684 "method": "nvmf_create_subsystem", 00:12:52.684 "req_id": 1 00:12:52.684 } 00:12:52.684 Got JSON-RPC error response 00:12:52.684 response: 00:12:52.684 { 00:12:52.684 "code": -32602, 00:12:52.684 "message": "Invalid cntlid range [6-5]" 00:12:52.684 }' 00:12:52.684 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:52.684 { 00:12:52.684 "nqn": "nqn.2016-06.io.spdk:cnode3665", 00:12:52.684 "min_cntlid": 6, 00:12:52.684 "max_cntlid": 5, 00:12:52.684 "method": "nvmf_create_subsystem", 00:12:52.684 "req_id": 1 00:12:52.684 } 00:12:52.684 Got JSON-RPC error response 00:12:52.684 response: 00:12:52.684 { 00:12:52.684 "code": -32602, 00:12:52.684 "message": "Invalid cntlid range [6-5]" 00:12:52.684 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.684 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:52.944 { 00:12:52.944 "name": "foobar", 00:12:52.944 "method": "nvmf_delete_target", 00:12:52.944 "req_id": 1 00:12:52.944 } 00:12:52.944 Got JSON-RPC error response 00:12:52.944 response: 00:12:52.944 { 00:12:52.944 "code": -32602, 00:12:52.944 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:52.944 }' 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:52.944 { 00:12:52.944 "name": "foobar", 00:12:52.944 "method": "nvmf_delete_target", 00:12:52.944 "req_id": 1 00:12:52.944 } 00:12:52.944 Got JSON-RPC error response 00:12:52.944 response: 00:12:52.944 { 00:12:52.944 "code": -32602, 00:12:52.944 "message": "The specified target doesn't exist, cannot delete it." 
00:12:52.944 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.944 rmmod nvme_tcp 00:12:52.944 rmmod nvme_fabrics 00:12:52.944 rmmod nvme_keyring 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 256004 ']' 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 256004 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 256004 ']' 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 256004 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 256004 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 256004' 00:12:52.944 killing process with pid 256004 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 256004 00:12:52.944 16:11:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 256004 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.204 16:11:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.742 16:11:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.742 00:12:55.742 real 0m8.508s 00:12:55.742 user 0m19.693s 00:12:55.742 sys 0m2.377s 00:12:55.742 16:11:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:55.742 16:11:38 nvmf_tcp.nvmf_invalid -- 
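Every rejection traced above, from the bad model number through the missing target, uses one capture-and-match pattern: run the RPC, keep whatever it prints, and glob-match the expected JSON-RPC error text. A minimal sketch of that pattern for the first cntlid case, with the rpc.py path shortened and the error-tolerant capture an assumption about how the harness keeps going after a failed call:

  # -i 0 is below the minimum cntlid of 1, so the target must refuse it
  out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26496 -i 0 2>&1) || true
  [[ $out == *"Invalid cntlid range"* ]]   # passes only if the RPC was rejected

The mirrored cases above (-i 65520, -I 0, -I 65520, and -i 6 -I 5) assert the same way: each must come back with code -32602 and a message naming the impossible cntlid range.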
common/autotest_common.sh@10 -- # set +x 00:12:55.742 ************************************ 00:12:55.742 END TEST nvmf_invalid 00:12:55.742 ************************************ 00:12:55.742 16:11:38 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:55.742 16:11:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:55.742 16:11:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:55.742 16:11:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.742 ************************************ 00:12:55.742 START TEST nvmf_abort 00:12:55.742 ************************************ 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:55.742 * Looking for test storage... 00:12:55.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.742 16:11:38 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.742 16:11:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.676 
16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:57.676 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:57.676 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:57.676 Found net devices under 0000:84:00.0: cvl_0_0 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:57.676 Found net devices under 0000:84:00.1: cvl_0_1 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:57.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:12:57.676 00:12:57.676 --- 10.0.0.2 ping statistics --- 00:12:57.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.676 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:12:57.676 00:12:57.676 --- 10.0.0.1 ping statistics --- 00:12:57.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.676 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.676 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=258654 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 258654 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 258654 ']' 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.677 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.677 [2024-07-15 16:11:40.588696] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
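The nvmf_tcp_init steps traced above are a plain namespace split: the first E810 port (cvl_0_0) moves into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1, and a firewall rule plus two pings prove the path before any NVMe/TCP traffic flows. Condensed from the trace, with interface and namespace names exactly as logged:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host

Because the target lives inside the namespace, nvmf_tgt itself (pid 258654 above) is launched through ip netns exec cvl_0_0_ns_spdk, and the sub-millisecond ping times are the sanity check that both directions work.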
00:12:57.677 [2024-07-15 16:11:40.588803] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.935 [2024-07-15 16:11:40.656547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.935 [2024-07-15 16:11:40.742960] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.935 [2024-07-15 16:11:40.743031] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.935 [2024-07-15 16:11:40.743044] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.935 [2024-07-15 16:11:40.743055] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.935 [2024-07-15 16:11:40.743064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.935 [2024-07-15 16:11:40.743194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.935 [2024-07-15 16:11:40.743222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.935 [2024-07-15 16:11:40.743225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.935 [2024-07-15 16:11:40.873481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:57.935 Malloc0 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.935 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:58.193 Delay0 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:58.193 16:11:40 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:58.193 [2024-07-15 16:11:40.936224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.193 16:11:40 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:58.193 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.193 [2024-07-15 16:11:41.031666] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:00.727 Initializing NVMe Controllers 00:13:00.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:00.727 controller IO queue size 128 less than required 00:13:00.727 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:00.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:00.727 Initialization complete. Launching workers. 
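The target-side sequence traced above exists to make aborts land while I/O is still queued: a 64 MiB malloc bdev is wrapped in a delay bdev with large artificial latency (the -r/-t/-w/-n values are in microseconds, so one second here), exposed through cnode0 on 10.0.0.2:4420, and then the stock abort example drives it at queue depth 128 from one core. Reassembled from the trace, with rpc_cmd standing in for the namespaced rpc.py wrapper used above:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

The counters that follow are the pass criterion: of the aborts submitted, almost all succeed, a handful are reported unsuccessful (the I/O completed first), and none may fail outright.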
00:13:00.727 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33553 00:13:00.727 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33614, failed to submit 62 00:13:00.727 success 33557, unsuccess 57, failed 0 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:00.727 rmmod nvme_tcp 00:13:00.727 rmmod nvme_fabrics 00:13:00.727 rmmod nvme_keyring 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 258654 ']' 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 258654 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 258654 ']' 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 258654 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 258654 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 258654' 00:13:00.727 killing process with pid 258654 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 258654 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 258654 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.727 16:11:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.631 16:11:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.631 00:13:02.631 real 0m7.412s 00:13:02.631 user 0m10.607s 00:13:02.631 sys 0m2.724s 00:13:02.631 16:11:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.631 16:11:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:02.631 ************************************ 00:13:02.631 END TEST nvmf_abort 00:13:02.631 ************************************ 00:13:02.889 16:11:45 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:02.889 16:11:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:02.889 16:11:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.889 16:11:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.889 ************************************ 00:13:02.889 START TEST nvmf_ns_hotplug_stress 00:13:02.889 ************************************ 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:02.889 * Looking for test storage... 00:13:02.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.889 16:11:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.889 16:11:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.889 16:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.787 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.787 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.787 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.787 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.787 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.787 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:04.788 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:04.788 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:04.788 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.046 16:11:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:05.046 Found net devices under 0000:84:00.0: cvl_0_0 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:05.046 Found net devices under 0000:84:00.1: cvl_0_1 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
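The device-discovery pass above reduces to a short pattern: for every supported NIC found on the PCI bus, read its interface names out of sysfs, keep the ones that are up, and use the resulting pair as target and initiator sides. A minimal sketch, assuming the same sysfs layout the trace shows (the authoritative logic lives in spdk/test/nvmf/common.sh):

    # Map each candidate PCI device to its kernel net interface(s).
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")
    done
    # Two interfaces found (cvl_0_0 and cvl_0_1 in this run): the first
    # becomes the target side, the second the initiator side.
    NVMF_TARGET_INTERFACE=${net_devs[0]}
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}

The target interface is then moved into its own network namespace (cvl_0_0_ns_spdk below) so initiator and target can talk over real TCP on a single host.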
00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:05.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:13:05.046 00:13:05.046 --- 10.0.0.2 ping statistics --- 00:13:05.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.046 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:05.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:13:05.046 00:13:05.046 --- 10.0.0.1 ping statistics --- 00:13:05.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.046 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=260910 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:05.046 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 260910 00:13:05.047 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 260910 ']' 00:13:05.047 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.047 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.047 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.047 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.047 16:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.047 [2024-07-15 16:11:47.986054] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:05.047 [2024-07-15 16:11:47.986142] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.047 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.304 [2024-07-15 16:11:48.059848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.304 [2024-07-15 16:11:48.151126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:05.304 [2024-07-15 16:11:48.151188] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.304 [2024-07-15 16:11:48.151204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.304 [2024-07-15 16:11:48.151217] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.304 [2024-07-15 16:11:48.151228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.304 [2024-07-15 16:11:48.151333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.304 [2024-07-15 16:11:48.151362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.304 [2024-07-15 16:11:48.151364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.304 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:05.304 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:05.304 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.304 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.304 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.304 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.562 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:05.562 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:05.562 [2024-07-15 16:11:48.511842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.562 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:05.820 16:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.078 [2024-07-15 16:11:48.990506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.078 16:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.336 16:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:06.594 Malloc0 00:13:06.594 16:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:06.851 Delay0 00:13:06.851 16:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.108 16:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:07.367 NULL1 00:13:07.367 16:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:07.625 16:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=261310 00:13:07.625 16:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:07.625 16:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:07.625 16:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.625 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.882 16:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.139 16:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:08.139 16:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:08.396 true 00:13:08.396 16:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:08.396 16:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.654 16:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.912 16:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:08.912 16:11:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:09.170 true 00:13:09.170 16:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:09.170 16:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.106 Read completed with error (sct=0, sc=11) 00:13:10.106 16:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.364 16:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:10.364 16:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 
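The stress target exports two namespaces built by the preceding RPC calls: a 32 MB malloc bdev wrapped in a delay bdev (Delay0), and a 1000 MB null bdev (NULL1). Condensed from the trace, with rpc.py abbreviating the full scripts/rpc.py path; the -r/-t/-w/-n values are average/p99 read/write latencies in microseconds, per bdev_delay_create's documented options as I read them:

    rpc.py bdev_malloc_create 32 512 -b Malloc0        # 32 MB RAM-backed bdev, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s of injected latency per I/O
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512             # 1000 MB null bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The delay bdev keeps I/O outstanding long enough that namespace removal races with in-flight reads, which appears to be exactly what the hotplug loop wants to provoke.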
00:13:10.621 true 00:13:10.621 16:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:10.621 16:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.879 16:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.136 16:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:11.136 16:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:11.394 true 00:13:11.394 16:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:11.394 16:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.328 16:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.328 16:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:12.328 16:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:12.586 true 00:13:12.586 16:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:12.586 16:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.843 16:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.100 16:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:13.100 16:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:13.359 true 00:13:13.359 16:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:13.359 16:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.296 16:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.554 16:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:14.554 16:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 
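Each numbered iteration above repeats one five-step pattern, reconstructed here from the @44-@50 script markers in the trace (a sketch of target/ns_hotplug_stress.sh, not the verbatim source; rpc.py again stands in for the full path):

    while kill -0 "$PERF_PID" 2>/dev/null; do                          # @44: perf workload still alive?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: yank the namespace
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: plug it back in
        null_size=$((null_size + 1))                                   # @49: 1001, 1002, ...
        rpc.py bdev_null_resize NULL1 "$null_size"                     # @50: grow NULL1 under load
    done

When kill -0 finally fails (the 30-second perf run having ended), the loop exits and the script waits for the perf pid; that is the "No such process" / wait sequence visible further down.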
00:13:14.811 true 00:13:14.811 16:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:14.811 16:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.069 16:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.326 16:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:15.326 16:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:15.583 true 00:13:15.583 16:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:15.583 16:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.519 16:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.519 16:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:16.519 16:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:16.776 true 00:13:16.776 16:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:16.776 16:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.034 16:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.291 16:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:17.291 16:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:17.549 true 00:13:17.549 16:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:17.549 16:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.480 16:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.737 16:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:18.737 16:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:18.995 true 00:13:18.995 16:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:18.995 16:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.252 16:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.508 16:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:19.508 16:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:19.765 true 00:13:19.765 16:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:19.765 16:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.694 16:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.694 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.950 16:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:20.950 16:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:21.207 true 00:13:21.207 16:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:21.207 16:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.464 16:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.748 16:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:21.748 16:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:22.006 true 00:13:22.006 16:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:22.006 16:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.936 16:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.192 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 
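The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines come from the perf workload, not the target. It was launched with -Q 1000, which (as I read spdk_nvme_perf's options) means continue on error and print only one message per 1000 occurrences; sc=11 decimal is NVMe generic status 0x0b, Invalid Namespace or Format, which is what reads racing a just-removed namespace would be expected to see. The invocation, copied from the trace with my comments:

    spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 \
        -Q 1000    # tolerate I/O errors; suppress all but one message per 1000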
00:13:23.192 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:23.449 true 00:13:23.449 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:23.449 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.707 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.965 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:23.965 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:24.223 true 00:13:24.223 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:24.223 16:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.155 16:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.155 16:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:25.155 16:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:25.413 true 00:13:25.413 16:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:25.413 16:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.670 16:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.928 16:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:25.928 16:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:26.187 true 00:13:26.187 16:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:26.187 16:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.121 16:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.379 16:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:27.379 16:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:27.637 true 00:13:27.637 16:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:27.637 16:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.895 16:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.152 16:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:28.152 16:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:28.409 true 00:13:28.409 16:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:28.409 16:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.342 16:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.600 16:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:29.600 16:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:29.857 true 00:13:29.857 16:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:29.857 16:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.115 16:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.382 16:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:30.382 16:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:30.642 true 00:13:30.642 16:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:30.642 16:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.575 16:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.832 16:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:31.832 16:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:32.090 true 00:13:32.090 16:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:32.090 16:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.348 16:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.605 16:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:32.605 16:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:32.605 true 00:13:32.605 16:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:32.605 16:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.977 16:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.977 16:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:33.977 16:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:34.234 true 00:13:34.234 16:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:34.234 16:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.491 16:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.749 16:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:34.749 16:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:35.006 true 00:13:35.006 16:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:35.006 16:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
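One detail worth calling out as the resize target keeps creeping upward (1023, 1024, ... above): bdev_null_create and bdev_null_resize both take sizes in MB, so each pass of the loop grows NULL1 by a single megabyte. Resizing the backing bdev of an exported namespace should also raise a namespace-attribute-change notification toward the connected host, adding one more event type to the stress mix; treat that as my reading of the code paths rather than something this log states.

    rpc.py bdev_null_create NULL1 1000 512   # name, size in MB, block size in bytes
    rpc.py bdev_null_resize NULL1 1024       # grow to 1024 MB; the exported namespace follows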
00:13:35.938 16:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.938 16:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:35.938 16:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:36.195 true 00:13:36.195 16:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:36.195 16:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.452 16:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.709 16:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:36.709 16:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:36.966 true 00:13:36.966 16:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:36.966 16:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.899 16:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.899 16:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:37.899 16:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:38.155 Initializing NVMe Controllers 00:13:38.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.155 Controller IO queue size 128, less than required. 00:13:38.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.155 Controller IO queue size 128, less than required. 00:13:38.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:38.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:38.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:38.155 Initialization complete. Launching workers. 
00:13:38.155 ======================================================== 00:13:38.155 Latency(us) 00:13:38.155 Device Information : IOPS MiB/s Average min max 00:13:38.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 764.75 0.37 87695.33 2296.20 1013680.83 00:13:38.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11141.86 5.44 11454.38 2862.91 448231.76 00:13:38.155 ======================================================== 00:13:38.155 Total : 11906.61 5.81 16351.26 2296.20 1013680.83 00:13:38.155 00:13:38.155 true 00:13:38.155 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 261310 00:13:38.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (261310) - No such process 00:13:38.155 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 261310 00:13:38.155 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.412 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.669 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:38.669 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:38.669 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:38.669 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.669 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:38.927 null0 00:13:38.927 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.927 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.927 16:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:39.184 null1 00:13:39.184 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.184 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.184 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:39.440 null2 00:13:39.440 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.440 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.440 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:39.698 null3 00:13:39.698 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.698 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.698 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:39.956 null4 00:13:39.956 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.956 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.956 16:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:40.213 null5 00:13:40.213 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.213 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.213 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:40.470 null6 00:13:40.470 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.470 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.470 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:40.727 null7 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
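From here on the eight workers run concurrently, which is why their xtrace lines interleave. The @62-@64 tags are the dispatch loop: each add_remove job appears to be launched in the background with its PID collected, and the @66 wait a few lines below blocks on all eight (265841 265842 ... 265854 in this run). Continuing the sketch above, under the same caveat that this is reconstructed from the trace:

    # Reconstructed from the @62-@66 tags: worker i hot-plugs namespace
    # i+1 backed by bdev null<i>; all eight workers run in parallel.
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"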
00:13:40.727 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
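The body of add_remove itself can be read off the @14-@18 tags: each worker pins one namespace ID to one bdev and attaches and detaches it ten times. As a sketch, with the RPC argument order copied verbatim from the trace and $rpc as defined in the setup sketch:

    # Reconstructed from the @14-@18 trace tags.
    add_remove() {
        local nsid=$1 bdev=$2      # @14: one namespace ID, one backing bdev
        for (( i = 0; i < 10; i++ )); do
            # @17: hot-add the namespace; @18: hot-remove it again
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }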
00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 265841 265842 265844 265846 265848 265850 265852 265854 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.728 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.985 16:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.242 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.499 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.756 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.013 16:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.271 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.528 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.785 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.043 
16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.043 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.043 16:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.043 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.043 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.043 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.043 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.043 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.307 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.307 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.307 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.307 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.307 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.307 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.307 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.636 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.901 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.902 16:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.160 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.160 
16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.160 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.160 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.160 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.160 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.160 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.160 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.418 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.677 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.936 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.201 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.201 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.201 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.201 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.201 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.201 16:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.201 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.201 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.201 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:45.459 
16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:45.459 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.459 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:45.459 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.459 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.459 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.459 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.459 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.717 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.976 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
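This was the workers' final round: in the trace below every loop counter reaches 10, the jobs exit, the @68 trap is cleared, and nvmftestfini tears the target down. Condensed from the nvmf/common.sh and autotest_common.sh tags that follow, the cleanup amounts to roughly this (helper names are as logged, bodies paraphrased from the trace, so treat it as a sketch):

    pid=260910   # the nvmf target process of this run
    # nvmfcleanup (nvmf/common.sh@117-@125): unload the kernel initiator;
    # the rmmod lines in the log are modprobe's verbose output
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # killprocess (autotest_common.sh@946-@970): stop the target unless its
    # comm is "sudo"; wait only reaps it when run from the parent shell
    if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    fi
    # nvmf_tcp_fini (nvmf/common.sh@274-@279): remove the SPDK netns and
    # flush the test address from the interface
    ip -4 addr flush cvl_0_1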
00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.234 16:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.234 rmmod nvme_tcp 00:13:46.234 rmmod nvme_fabrics 00:13:46.234 rmmod nvme_keyring 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 260910 ']' 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 260910 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 260910 ']' 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 260910 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 260910 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 260910' 00:13:46.234 killing process with pid 260910 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 260910 00:13:46.234 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 260910 00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini
00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:46.492 16:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:48.398 16:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:48.398
00:13:48.398 real 0m45.729s
00:13:48.398 user 3m28.995s
00:13:48.398 sys 0m16.417s
00:13:48.398 16:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:48.398 16:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:13:48.398 ************************************
00:13:48.398 END TEST nvmf_ns_hotplug_stress
00:13:48.398 ************************************
00:13:48.656 16:12:31 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:48.656 16:12:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:13:48.656 16:12:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:48.656 16:12:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:13:48.656 ************************************
00:13:48.656 START TEST nvmf_connect_stress
00:13:48.656 ************************************
00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:48.656 * Looking for test storage...
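Aside on the host identity used throughout these traces: nvmf/common.sh derives it from nvme-cli, as the @17/@18 lines below show. A minimal sketch of that derivation, assuming nvme-cli is installed; the UUID extraction here is illustrative, not the verbatim common.sh code:

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # keep only the UUID portion of the NQN (illustrative)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")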
00:13:48.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.656 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.657 16:12:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:48.657 16:12:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.657 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.657 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.657 16:12:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.657 16:12:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.563 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:50.564 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:50.564 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:50.564 Found net devices under 0000:84:00.0: cvl_0_0 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:50.564 16:12:33 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:50.564 Found net devices under 0000:84:00.1: cvl_0_1 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:50.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:50.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms
00:13:50.564
00:13:50.564 --- 10.0.0.2 ping statistics ---
00:13:50.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:50.564 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:50.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:50.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms
00:13:50.564
00:13:50.564 --- 10.0.0.1 ping statistics ---
00:13:50.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:50.564 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=268616
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 268616
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 268616 ']'
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:50.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable
00:13:50.564 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:50.564 [2024-07-15 16:12:33.499252] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
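The interface plumbing traced above (nvmf/common.sh@244-268) reduces to the command sequence below; a minimal sketch assuming root privileges and this host's E810 port names cvl_0_0/cvl_0_1:

ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # verify initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # verify target -> initiator

The sub-millisecond round trips in the two ping transcripts confirm both directions before nvmf_tgt is launched inside the namespace.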
00:13:50.564 [2024-07-15 16:12:33.499333] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.564 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.822 [2024-07-15 16:12:33.569613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:50.822 [2024-07-15 16:12:33.660132] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.822 [2024-07-15 16:12:33.660194] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.822 [2024-07-15 16:12:33.660210] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.822 [2024-07-15 16:12:33.660223] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.822 [2024-07-15 16:12:33.660236] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.822 [2024-07-15 16:12:33.660299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.822 [2024-07-15 16:12:33.660414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:50.822 [2024-07-15 16:12:33.660417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.822 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.822 [2024-07-15 16:12:33.797941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.081 [2024-07-15 16:12:33.826907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.081 NULL1 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=268748 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.081 16:12:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.339 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.339 16:12:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:51.339 16:12:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.339 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.339 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.596 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.596 16:12:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:51.596 16:12:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.596 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.596 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.163 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.163 16:12:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:52.163 16:12:34 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.163 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.163 16:12:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.422 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.422 16:12:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:52.422 16:12:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.422 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.422 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.680 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.680 16:12:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:52.680 16:12:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.680 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.680 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.938 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.938 16:12:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:52.938 16:12:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.938 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.938 16:12:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.197 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.197 16:12:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:53.197 16:12:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.197 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.197 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.766 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.766 16:12:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:53.766 16:12:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.766 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.766 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.024 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.024 16:12:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:54.024 16:12:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.024 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.024 16:12:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.281 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.281 16:12:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:54.282 16:12:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:54.282 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.282 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.541 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.541 16:12:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:54.541 16:12:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.541 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.541 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.798 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.798 16:12:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:54.798 16:12:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.798 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.798 16:12:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.365 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.365 16:12:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:55.365 16:12:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.365 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.365 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.622 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.622 16:12:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:55.622 16:12:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.622 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.622 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.881 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.881 16:12:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:55.881 16:12:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.881 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.881 16:12:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.140 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.140 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:56.140 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.140 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.140 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.398 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.398 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:56.398 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.398 16:12:39 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.398 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.965 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.965 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:56.965 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.965 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.965 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.222 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.222 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:57.222 16:12:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.222 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.222 16:12:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.481 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.481 16:12:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:57.481 16:12:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.481 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.481 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.740 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.740 16:12:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:57.740 16:12:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.740 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.740 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.999 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.999 16:12:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:57.999 16:12:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.999 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.999 16:12:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.564 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.564 16:12:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:58.564 16:12:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.564 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.564 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.821 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.821 16:12:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:58.821 16:12:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.821 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.821 
16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.079 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.079 16:12:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:59.079 16:12:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.079 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.079 16:12:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.338 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.338 16:12:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:59.338 16:12:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.338 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.338 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.598 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.598 16:12:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:13:59.598 16:12:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.598 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.598 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.165 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.165 16:12:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:14:00.165 16:12:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.165 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.165 16:12:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.423 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.423 16:12:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:14:00.423 16:12:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.423 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.423 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.682 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.682 16:12:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:14:00.682 16:12:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.682 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.682 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.939 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.939 16:12:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:14:00.939 16:12:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.939 16:12:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.939 16:12:43 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.196 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.196 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.196 16:12:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 268748 00:14:01.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (268748) - No such process 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 268748 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.197 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.197 rmmod nvme_tcp 00:14:01.456 rmmod nvme_fabrics 00:14:01.456 rmmod nvme_keyring 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 268616 ']' 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 268616 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 268616 ']' 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 268616 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 268616 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 268616' 00:14:01.456 killing process with pid 268616 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 268616 00:14:01.456 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 268616 00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:01.716 16:12:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:03.623 16:12:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:03.623
00:14:03.623 real 0m15.092s
00:14:03.623 user 0m37.605s
00:14:03.623 sys 0m6.275s
00:14:03.623 16:12:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:03.623 16:12:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:03.623 ************************************
00:14:03.623 END TEST nvmf_connect_stress
00:14:03.623 ************************************
00:14:03.623 16:12:46 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:03.623 16:12:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:03.623 16:12:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:03.623 16:12:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:03.623 ************************************
00:14:03.623 START TEST nvmf_fused_ordering
00:14:03.623 ************************************
00:14:03.623 16:12:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:03.882 * Looking for test storage...
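For reference, the nvmf_connect_stress run that just ended condenses to the target-side setup below; rpc_cmd in the trace is assumed to wrap scripts/rpc.py against /var/tmp/spdk.sock, and paths are shortened, so read this as a sketch of connect_stress.sh@15-21 rather than the verbatim script:

rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8192-byte in-capsule data
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512-byte blocks
connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10

While connect_stress churns connections for 10 seconds, the test loop keeps issuing RPCs and uses kill -0 on the perf pid (268748 above) to confirm it is still alive; the expected 'No such process' at the end marks a clean exit.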
00:14:03.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.882 16:12:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:05.870 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:05.870 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:05.870 Found net devices under 0000:84:00.0: cvl_0_0 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.870 16:12:48 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:05.870 Found net devices under 0000:84:00.1: cvl_0_1 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.870 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:14:05.870 00:14:05.871 --- 10.0.0.2 ping statistics --- 00:14:05.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.871 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:05.871 00:14:05.871 --- 10.0.0.1 ping statistics --- 00:14:05.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.871 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=271919 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 271919 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 271919 ']' 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:05.871 16:12:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.130 [2024-07-15 16:12:48.874820] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
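The topology the harness just built is straightforward to reproduce by hand: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace to act as the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the target binary runs inside the namespace. A minimal sketch of the same sequence, with the interface names, addresses, and flags taken straight from the trace (run as root from the SPDK build tree):

    # Clear stale addresses, then split the two ports across namespaces
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # target reachable from initiator
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse path
    # Start the target in the namespace: shm id 0, all tracepoint groups, core mask 0x2
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &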
00:14:06.130 [2024-07-15 16:12:48.874893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.130 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.130 [2024-07-15 16:12:48.943984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.130 [2024-07-15 16:12:49.034605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.130 [2024-07-15 16:12:49.034654] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.130 [2024-07-15 16:12:49.034671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.130 [2024-07-15 16:12:49.034684] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.130 [2024-07-15 16:12:49.034696] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.130 [2024-07-15 16:12:49.034735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 [2024-07-15 16:12:49.184699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 [2024-07-15 16:12:49.200917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 NULL1 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.388 16:12:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:06.388 [2024-07-15 16:12:49.245878] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:06.388 [2024-07-15 16:12:49.245918] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271945 ] 00:14:06.389 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.955 Attached to nqn.2016-06.io.spdk:cnode1 00:14:06.955 Namespace ID: 1 size: 1GB 00:14:06.955 fused_ordering(0) 00:14:06.955 fused_ordering(1) 00:14:06.955 fused_ordering(2) 00:14:06.955 fused_ordering(3) 00:14:06.955 fused_ordering(4) 00:14:06.955 fused_ordering(5) 00:14:06.955 fused_ordering(6) 00:14:06.955 fused_ordering(7) 00:14:06.955 fused_ordering(8) 00:14:06.955 fused_ordering(9) 00:14:06.955 fused_ordering(10) 00:14:06.955 fused_ordering(11) 00:14:06.955 fused_ordering(12) 00:14:06.955 fused_ordering(13) 00:14:06.955 fused_ordering(14) 00:14:06.955 fused_ordering(15) 00:14:06.955 fused_ordering(16) 00:14:06.955 fused_ordering(17) 00:14:06.955 fused_ordering(18) 00:14:06.955 fused_ordering(19) 00:14:06.955 fused_ordering(20) 00:14:06.955 fused_ordering(21) 00:14:06.955 fused_ordering(22) 00:14:06.955 fused_ordering(23) 00:14:06.955 fused_ordering(24) 00:14:06.955 fused_ordering(25) 00:14:06.955 fused_ordering(26) 00:14:06.955 fused_ordering(27) 00:14:06.955 fused_ordering(28) 00:14:06.955 fused_ordering(29) 00:14:06.955 fused_ordering(30) 00:14:06.955 fused_ordering(31) 00:14:06.955 fused_ordering(32) 00:14:06.955 fused_ordering(33) 00:14:06.955 fused_ordering(34) 00:14:06.955 fused_ordering(35) 00:14:06.955 fused_ordering(36) 00:14:06.955 fused_ordering(37) 00:14:06.955 fused_ordering(38) 00:14:06.955 fused_ordering(39) 00:14:06.955 fused_ordering(40) 00:14:06.955 fused_ordering(41) 00:14:06.955 fused_ordering(42) 00:14:06.955 fused_ordering(43) 00:14:06.955 fused_ordering(44) 00:14:06.955 fused_ordering(45) 
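The provisioning traced above is plain JSON-RPC against the target's default UNIX socket, /var/tmp/spdk.sock (rpc_cmd is effectively the harness's wrapper around scripts/rpc.py, and the socket is visible from the root namespace, so no netns exec is needed). The fused_ordering(N) lines around this point are the test app's per-iteration progress counter, running 0 through 1023; the run continues below. A by-hand sketch of the same setup and run, arguments copied from the trace:

    # Provision the subsystem over /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512 B blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Drive it from the initiator side, exactly as the test does
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'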
00:14:06.955 fused_ordering(46) [... fused_ordering(47) through fused_ordering(1012) elided: the counter advanced without gaps while timestamps ran from 00:14:06.955 to 00:14:09.284 ...] fused_ordering(1013) 00:14:09.284
fused_ordering(1014) 00:14:09.284 fused_ordering(1015) 00:14:09.284 fused_ordering(1016) 00:14:09.284 fused_ordering(1017) 00:14:09.284 fused_ordering(1018) 00:14:09.284 fused_ordering(1019) 00:14:09.284 fused_ordering(1020) 00:14:09.284 fused_ordering(1021) 00:14:09.284 fused_ordering(1022) 00:14:09.284 fused_ordering(1023) 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.284 rmmod nvme_tcp 00:14:09.284 rmmod nvme_fabrics 00:14:09.284 rmmod nvme_keyring 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 271919 ']' 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 271919 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 271919 ']' 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 271919 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 271919 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 271919' 00:14:09.284 killing process with pid 271919 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 271919 00:14:09.284 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 271919 00:14:09.543 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.543 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.543 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.543 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.543 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.543 16:12:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.543 16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.543 
16:12:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.074 16:12:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:12.074 00:14:12.074 real 0m7.886s 00:14:12.074 user 0m5.291s 00:14:12.074 sys 0m3.725s 00:14:12.074 16:12:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:12.074 16:12:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.074 ************************************ 00:14:12.074 END TEST nvmf_fused_ordering 00:14:12.074 ************************************ 00:14:12.074 16:12:54 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:12.074 16:12:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:12.074 16:12:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.074 16:12:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.074 ************************************ 00:14:12.074 START TEST nvmf_delete_subsystem 00:14:12.074 ************************************ 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:12.074 * Looking for test storage... 00:14:12.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.074 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.075 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.075 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.075 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.075 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.075 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.075 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.075 16:12:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.977 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:13.978 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:13.978 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.978 16:12:56 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:13.978 Found net devices under 0000:84:00.0: cvl_0_0 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:13.978 Found net devices under 0000:84:00.1: cvl_0_1 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:14:13.978 00:14:13.978 --- 10.0.0.2 ping statistics --- 00:14:13.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.978 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:14:13.978 00:14:13.978 --- 10.0.0.1 ping statistics --- 00:14:13.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.978 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=274281 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 274281 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 274281 ']' 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:13.978 16:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.978 [2024-07-15 16:12:56.730622] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:13.978 [2024-07-15 16:12:56.730703] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.978 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.978 [2024-07-15 16:12:56.796937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:13.978 [2024-07-15 16:12:56.886417] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:13.978 [2024-07-15 16:12:56.886473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.978 [2024-07-15 16:12:56.886501] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.978 [2024-07-15 16:12:56.886512] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.978 [2024-07-15 16:12:56.886522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.978 [2024-07-15 16:12:56.886602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.978 [2024-07-15 16:12:56.886606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.236 [2024-07-15 16:12:57.033892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.236 [2024-07-15 16:12:57.050146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.236 NULL1 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.236 Delay0 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=274306 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:14.236 16:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:14.236 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.236 [2024-07-15 16:12:57.124735] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
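The delete_subsystem test above wires a deliberately slow namespace into the target: a 1000 MB null bdev is wrapped in a delay bdev with roughly one-second average and p99 latencies, so a queue-depth-128 perf run still has commands outstanding when the subsystem is torn down. A minimal sketch of the sequence this trace drives, including the nvmf_delete_subsystem that follows below, assuming SPDK's scripts/rpc.py as the RPC entry point (the script itself goes through its rpc_cmd wrapper on /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path to the stock RPC client
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512          # 1000 MB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latencies (us)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &  # I/O held in flight by the delay bdev
    sleep 2                                        # the script's sleep before deleting
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1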
00:14:16.130 16:12:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.130 16:12:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.130 16:12:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 [2024-07-15 16:12:59.215691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc65d40 is same with the state(5) to be set 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 
00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, 
sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with 
error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:16.388 Write completed with error (sct=0, sc=8) 00:14:16.388 Read completed with error (sct=0, sc=8) 00:14:16.388 starting I/O failed: -6 00:14:17.321 [2024-07-15 16:13:00.181040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7d620 is same with the state(5) to be set 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 [2024-07-15 16:13:00.218014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60ec0 is same with the state(5) to be set 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed 
with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 [2024-07-15 16:13:00.218973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f33a8000c00 is same with the state(5) to be set 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error 
(sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Write completed with error (sct=0, sc=8) 00:14:17.321 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 [2024-07-15 16:13:00.219283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f33a800c2f0 is same with the state(5) to be set 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Write completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 Read completed with error (sct=0, sc=8) 00:14:17.322 [2024-07-15 16:13:00.219489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc60b00 is same with the state(5) to be set 00:14:17.322 Initializing NVMe Controllers 00:14:17.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.322 Controller IO queue size 128, less than required. 00:14:17.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:17.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:17.322 Initialization complete. Launching workers. 
00:14:17.322 ======================================================== 00:14:17.322 Latency(us) 00:14:17.322 Device Information : IOPS MiB/s Average min max 00:14:17.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.83 0.08 926878.31 594.74 2003337.44 00:14:17.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.76 0.08 1005480.61 503.46 2003507.11 00:14:17.322 ======================================================== 00:14:17.322 Total : 336.59 0.16 967222.85 503.46 2003507.11 00:14:17.322 00:14:17.322 [2024-07-15 16:13:00.220736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7d620 (9): Bad file descriptor 00:14:17.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:17.322 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.322 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:17.322 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 274306 00:14:17.322 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 274306 00:14:17.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (274306) - No such process 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 274306 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 274306 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 274306 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
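The wall of "completed with error (sct=0, sc=8)" entries above is the point of the test: deleting the subsystem while the delay bdev is still holding requests forces every queued command to complete with status type 0, status code 8, which per the NVMe base specification decodes as Command Aborted due to SQ Deletion. spdk_nvme_perf then exits nonzero ("errors occurred"), and the script asserts this with its NOT/wait check once kill -0 reports the pid gone. The helper idiom, as a rough equivalent of the autotest_common.sh version:

    NOT() { "$@" && return 1 || return 0; }   # rough sketch: succeeds only when the wrapped command fails
    NOT wait "$perf_pid"                      # perf must already have exited with an error

With that verified, the subsystem and listener are recreated for a second pass in which nothing is deleted mid-run.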
00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.888 [2024-07-15 16:13:00.744857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=274710 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:17.888 16:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:17.888 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.888 [2024-07-15 16:13:00.807995] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
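In this second pass the perf run (-t 3, pid 274710 here) is simply allowed to finish, with the script polling the pid twice a second and bailing out if it lingers. Roughly, per the delete_subsystem.sh@57-60 trace lines that follow:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf_pid captured from the backgrounded run
        (( delay++ > 20 )) && exit 1            # give up after ~10 s of 0.5 s sleeps
        sleep 0.5
    done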
00:14:18.452 16:13:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:18.452 16:13:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:18.452 16:13:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:19.017 16:13:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.017 16:13:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:19.017 16:13:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:19.583 16:13:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.583 16:13:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:19.583 16:13:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:19.840 16:13:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:19.840 16:13:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:19.840 16:13:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:20.405 16:13:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:20.405 16:13:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:20.405 16:13:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:20.970 16:13:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:20.970 16:13:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:20.970 16:13:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:20.970 Initializing NVMe Controllers 00:14:20.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.970 Controller IO queue size 128, less than required. 00:14:20.970 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:20.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:20.970 Initialization complete. Launching workers. 
00:14:20.970 ======================================================== 00:14:20.970 Latency(us) 00:14:20.970 Device Information : IOPS MiB/s Average min max 00:14:20.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004985.69 1000196.84 1043451.59 00:14:20.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005070.98 1000255.49 1041256.44 00:14:20.970 ======================================================== 00:14:20.970 Total : 256.00 0.12 1005028.33 1000196.84 1043451.59 00:14:20.970 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 274710 00:14:21.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (274710) - No such process 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 274710 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.536 rmmod nvme_tcp 00:14:21.536 rmmod nvme_fabrics 00:14:21.536 rmmod nvme_keyring 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 274281 ']' 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 274281 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 274281 ']' 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 274281 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 274281 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 274281' 00:14:21.536 killing process with pid 274281 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 274281 00:14:21.536 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 274281 
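This nvmftestfini teardown mirrors nvmftestinit: the kernel initiator modules are unloaded and the target killed by its saved pid above, and the lines that follow drop the test network namespace and flush the initiator-side address before the per-test timing summary prints. In outline (the namespace-removal step is an assumption about what _remove_spdk_ns does):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                      # 274281 in this run
    ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1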
00:14:21.795 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:21.795 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:21.795 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:21.795 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:21.795 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:21.795 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:21.795 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:21.796 16:13:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:23.718 16:13:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:23.719
00:14:23.719 real	0m12.114s
00:14:23.719 user	0m27.467s
00:14:23.719 sys	0m2.950s
00:14:23.719 16:13:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:23.719 16:13:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:23.719 ************************************
00:14:23.719 END TEST nvmf_delete_subsystem
00:14:23.719 ************************************
00:14:23.719 16:13:06 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:23.719 16:13:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:23.719 16:13:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:23.719 16:13:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:23.719 ************************************
00:14:23.719 START TEST nvmf_ns_masking
00:14:23.719 ************************************
00:14:23.719 16:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:14:23.978 * Looking for test storage...
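The nvmf_ns_masking trace that follows is dense, so here is the RPC skeleton it drives, condensed from the rpc.py calls visible below (the test interleaves host-side visibility checks between these steps, and error handling is omitted, so treat this as a sketch rather than the script itself):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target-side setup: TCP transport, two 64 MiB malloc bdevs, one subsystem.
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc1
  $rpc_py bdev_malloc_create 64 512 -b Malloc2
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # An auto-visible namespace: every connected host sees it immediately.
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1

  # The masked variant: re-add the namespace with --no-auto-visible, then
  # grant and revoke per-host access explicitly.
  $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc_py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

On the host side the test decides visibility by reading the namespace's NGUID: nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid comes back as 32 zeros when the controller does not expose the namespace, and as the real NGUID when it does.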
00:14:23.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:23.978 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=3203406f-d645-4f72-b726-fca9de84c697 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.979 16:13:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.979 16:13:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:25.898 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:25.898 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:25.898 Found net devices under 0000:84:00.0: cvl_0_0 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
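Between the two "Found net devices" lines, gather_supported_nvmf_pci_devs is resolving each matching PCI function to its kernel net interface by globbing the device's sysfs net/ directory. A standalone sketch of that mapping, where the operstate read is an assumption standing in for the script's "up == up" test:

  # List the kernel net interfaces behind one PCI function via sysfs.
  pci=0000:84:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $dev ]] || continue            # no netdev bound to this function
      name=${dev##*/}                      # keep just the interface name
      state=$(cat "$dev/operstate")        # e.g. "up", as tested in the trace
      echo "$pci -> $name ($state)"
  done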
00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:25.898 Found net devices under 0000:84:00.1: cvl_0_1 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.898 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:14:26.156 00:14:26.156 --- 10.0.0.2 ping statistics --- 00:14:26.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.156 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:14:26.156 00:14:26.156 --- 10.0.0.1 ping statistics --- 00:14:26.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.156 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=277129 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 277129 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 277129 ']' 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:26.156 16:13:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.156 [2024-07-15 16:13:08.967636] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
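At this point the harness has split the two ice ports into a point-to-point target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits TCP/4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the nvmf/common.sh commands in the trace (run as root):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Every later target-side command is then wrapped in ip netns exec cvl_0_0_ns_spdk, which is how the nvmf_tgt above ends up listening on 10.0.0.2 while the nvme initiator connects from the root namespace.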
00:14:26.156 [2024-07-15 16:13:08.967711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.156 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.156 [2024-07-15 16:13:09.037471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.157 [2024-07-15 16:13:09.130316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.157 [2024-07-15 16:13:09.130373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.157 [2024-07-15 16:13:09.130388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.157 [2024-07-15 16:13:09.130401] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.157 [2024-07-15 16:13:09.130413] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.157 [2024-07-15 16:13:09.130497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.157 [2024-07-15 16:13:09.130566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.157 [2024-07-15 16:13:09.130593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.157 [2024-07-15 16:13:09.130594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.414 16:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.414 16:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:26.414 16:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.414 16:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.414 16:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.414 16:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.414 16:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:26.672 [2024-07-15 16:13:09.497231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.672 16:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:26.672 16:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:26.672 16:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:26.930 Malloc1 00:14:26.930 16:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:27.188 Malloc2 00:14:27.188 16:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:27.446 16:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:27.710 16:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.275 [2024-07-15 16:13:10.953547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.275 16:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:28.275 16:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3203406f-d645-4f72-b726-fca9de84c697 -a 10.0.0.2 -s 4420 -i 4 00:14:28.275 16:13:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.275 16:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:28.275 16:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.275 16:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:28.275 16:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:30.231 [ 0]:0x1 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:30.231 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:30.516 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c4ad3d109bca49c59065647799de805f 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c4ad3d109bca49c59065647799de805f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:14:30.517 [ 0]:0x1 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c4ad3d109bca49c59065647799de805f 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c4ad3d109bca49c59065647799de805f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:30.517 [ 1]:0x2 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:30.517 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:30.772 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3b8cc588ff734cc2b0e6a717d39932c8 00:14:30.772 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3b8cc588ff734cc2b0e6a717d39932c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.772 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:30.772 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.772 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.028 16:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:31.284 16:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:31.285 16:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3203406f-d645-4f72-b726-fca9de84c697 -a 10.0.0.2 -s 4420 -i 4 00:14:31.541 16:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:31.541 16:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:31.541 16:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.541 16:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:31.541 16:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:31.541 16:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.438 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:33.695 [ 0]:0x2 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3b8cc588ff734cc2b0e6a717d39932c8 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3b8cc588ff734cc2b0e6a717d39932c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.695 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:33.952 [ 0]:0x1 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c4ad3d109bca49c59065647799de805f 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c4ad3d109bca49c59065647799de805f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:33.952 [ 1]:0x2 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3b8cc588ff734cc2b0e6a717d39932c8 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3b8cc588ff734cc2b0e6a717d39932c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.952 16:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:34.210 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.466 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:34.467 
16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:34.467 [ 0]:0x2 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3b8cc588ff734cc2b0e6a717d39932c8 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3b8cc588ff734cc2b0e6a717d39932c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.467 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:34.723 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:34.723 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3203406f-d645-4f72-b726-fca9de84c697 -a 10.0.0.2 -s 4420 -i 4 00:14:34.980 16:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:34.980 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:34.980 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.980 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:34.980 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:34.980 16:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:36.872 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:37.129 16:13:19 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:37.129 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:37.129 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:37.129 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.129 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.129 [ 0]:0x1 00:14:37.129 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.129 16:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c4ad3d109bca49c59065647799de805f 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c4ad3d109bca49c59065647799de805f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.129 [ 1]:0x2 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3b8cc588ff734cc2b0e6a717d39932c8 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3b8cc588ff734cc2b0e6a717d39932c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.129 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.694 [ 0]:0x2 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3b8cc588ff734cc2b0e6a717d39932c8 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3b8cc588ff734cc2b0e6a717d39932c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:37.694 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:37.952 [2024-07-15 16:13:20.693149] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:37.952 request: 00:14:37.952 { 00:14:37.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.952 "nsid": 2, 00:14:37.952 "host": "nqn.2016-06.io.spdk:host1", 00:14:37.952 "method": 
"nvmf_ns_remove_host", 00:14:37.952 "req_id": 1 00:14:37.952 } 00:14:37.952 Got JSON-RPC error response 00:14:37.952 response: 00:14:37.952 { 00:14:37.952 "code": -32602, 00:14:37.952 "message": "Invalid parameters" 00:14:37.952 } 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:37.952 [ 0]:0x2 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3b8cc588ff734cc2b0e6a717d39932c8 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3b8cc588ff734cc2b0e6a717d39932c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.952 16:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.209 rmmod nvme_tcp 00:14:38.209 rmmod nvme_fabrics 00:14:38.209 rmmod nvme_keyring 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 277129 ']' 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 277129 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 277129 ']' 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 277129 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 277129 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 277129' 00:14:38.209 killing process with pid 277129 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 277129 00:14:38.209 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 277129 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.774 16:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.674 16:13:23 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:40.674 00:14:40.674 real 0m16.854s 00:14:40.674 user 0m52.784s 00:14:40.674 sys 0m3.796s 00:14:40.674 16:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.674 16:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.674 ************************************ 00:14:40.674 END TEST nvmf_ns_masking 00:14:40.674 ************************************ 00:14:40.674 16:13:23 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:40.674 16:13:23 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.674 16:13:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:40.674 16:13:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:40.674 16:13:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:40.674 ************************************ 00:14:40.674 START TEST nvmf_nvme_cli 00:14:40.674 ************************************ 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.674 * Looking for test storage... 00:14:40.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:40.674 16:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:40.675 16:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.200 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:43.201 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:43.201 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.201 16:13:25 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:43.201 Found net devices under 0000:84:00.0: cvl_0_0 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:43.201 Found net devices under 0000:84:00.1: cvl_0_1 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:14:43.201 00:14:43.201 --- 10.0.0.2 ping statistics --- 00:14:43.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.201 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:14:43.201 00:14:43.201 --- 10.0.0.1 ping statistics --- 00:14:43.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.201 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=280636 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 280636 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 280636 ']' 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
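At this point the harness has finished the network split for the TCP run: one e810 port (cvl_0_0) was moved into the private namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 for the target, the peer port cvl_0_1 stayed in the root namespace as the 10.0.0.1 initiator, TCP port 4420 was opened in iptables, and a ping in each direction confirmed the path. A minimal sketch of that plumbing, with IF_TGT/IF_INI as placeholder interface names rather than anything taken from the log:

    # Sketch only: the namespace wiring nvmf_tcp_init performs above.
    # IF_TGT / IF_INI are assumed variables (the log uses cvl_0_0 / cvl_0_1).
    ip netns add nvmf_tgt_ns                       # private target namespace
    ip link set "$IF_TGT" netns nvmf_tgt_ns        # move the target port into it
    ip addr add 10.0.0.1/24 dev "$IF_INI"          # initiator-side address
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev "$IF_TGT"
    ip link set "$IF_INI" up
    ip netns exec nvmf_tgt_ns ip link set "$IF_TGT" up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1   # target -> initiator

Everything after this runs the target inside the namespace, which is why the trace prepends NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk) to NVMF_APP before launching nvmf_tgt.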
00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:43.201 16:13:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.201 [2024-07-15 16:13:25.835933] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:43.201 [2024-07-15 16:13:25.836032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.201 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.201 [2024-07-15 16:13:25.914510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.201 [2024-07-15 16:13:26.010591] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.201 [2024-07-15 16:13:26.010653] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.201 [2024-07-15 16:13:26.010669] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.201 [2024-07-15 16:13:26.010683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.201 [2024-07-15 16:13:26.010694] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.201 [2024-07-15 16:13:26.010756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.201 [2024-07-15 16:13:26.010815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.201 [2024-07-15 16:13:26.010847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.201 [2024-07-15 16:13:26.010849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.201 [2024-07-15 16:13:26.154307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.201 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 Malloc0 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 Malloc1 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 [2024-07-15 16:13:26.236386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:43.459 00:14:43.459 Discovery Log Number of Records 2, Generation counter 2 00:14:43.459 =====Discovery Log Entry 0====== 00:14:43.459 trtype: tcp 00:14:43.459 adrfam: ipv4 00:14:43.459 subtype: current discovery subsystem 00:14:43.459 treq: not required 00:14:43.459 portid: 0 00:14:43.459 trsvcid: 4420 00:14:43.459 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:43.459 traddr: 10.0.0.2 00:14:43.459 eflags: explicit discovery connections, duplicate discovery information 00:14:43.459 sectype: none 00:14:43.459 =====Discovery Log Entry 1====== 00:14:43.459 trtype: tcp 00:14:43.459 adrfam: ipv4 00:14:43.459 subtype: nvme subsystem 00:14:43.459 treq: not required 00:14:43.459 portid: 0 00:14:43.459 trsvcid: 
4420 00:14:43.459 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:43.459 traddr: 10.0.0.2 00:14:43.459 eflags: none 00:14:43.459 sectype: none 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.459 16:13:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:43.460 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:43.460 16:13:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.393 16:13:27 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:44.393 16:13:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:44.393 16:13:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.393 16:13:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:44.393 16:13:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:44.393 16:13:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:46.290 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:46.291 16:13:29 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:46.291 /dev/nvme0n1 ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:46.291 16:13:29 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.291 rmmod nvme_tcp 00:14:46.291 rmmod nvme_fabrics 00:14:46.291 rmmod nvme_keyring 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 280636 ']' 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 280636 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 280636 ']' 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 280636 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 280636 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 280636' 00:14:46.291 killing process with pid 280636 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 280636 00:14:46.291 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 280636 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.550 16:13:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.085 16:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.085 00:14:49.085 real 0m7.998s 00:14:49.085 user 0m14.424s 00:14:49.085 sys 0m2.218s 00:14:49.085 16:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.085 16:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.085 ************************************ 00:14:49.085 END TEST nvmf_nvme_cli 00:14:49.085 ************************************ 00:14:49.085 16:13:31 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:49.085 16:13:31 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:49.085 16:13:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:49.085 16:13:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.085 16:13:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.085 ************************************ 00:14:49.085 START TEST nvmf_vfio_user 00:14:49.085 ************************************ 00:14:49.085 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:49.085 * Looking for test storage... 00:14:49.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.085 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.085 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:49.085 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:49.086 
16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=281520 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 281520' 00:14:49.086 Process pid: 281520 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 281520 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 281520 ']' 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:49.086 [2024-07-15 16:13:31.731832] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:49.086 [2024-07-15 16:13:31.731928] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.086 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.086 [2024-07-15 16:13:31.791755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.086 [2024-07-15 16:13:31.879289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.086 [2024-07-15 16:13:31.879355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.086 [2024-07-15 16:13:31.879369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.086 [2024-07-15 16:13:31.879387] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.086 [2024-07-15 16:13:31.879412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
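The vfio-user variant that starts here replaces the TCP listener with a per-controller socket directory under /var/run/vfio-user; the target is pinned to cores 0-3 (-m '[0,1,2,3]') and the test then drives a short RPC sequence, shown condensed below (rpc.py stands for the full scripts/rpc.py path used in the trace):

    # Condensed RPC sequence the trace performs next, per controller:
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same steps repeat for Malloc2 / nqn.2019-07.io.spdk:cnode2 under vfio-user2/2, giving the NUM_DEVICES=2 controllers the test exercises.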
00:14:49.086 [2024-07-15 16:13:31.879468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.086 [2024-07-15 16:13:31.879522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.086 [2024-07-15 16:13:31.879524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.086 [2024-07-15 16:13:31.879498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:49.086 16:13:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:50.460 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:50.460 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:50.460 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:50.460 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.460 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:50.460 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:50.716 Malloc1 00:14:50.716 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:50.973 16:13:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:51.231 16:13:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:51.488 16:13:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.488 16:13:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:51.488 16:13:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:51.745 Malloc2 00:14:51.745 16:13:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:52.002 16:13:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:52.259 16:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:52.518 16:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:52.518 16:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:52.518 16:13:35 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.518 16:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:52.518 16:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:52.518 16:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:52.518 [2024-07-15 16:13:35.307562] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:52.518 [2024-07-15 16:13:35.307599] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281971 ] 00:14:52.518 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.518 [2024-07-15 16:13:35.341891] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:52.518 [2024-07-15 16:13:35.350533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.518 [2024-07-15 16:13:35.350560] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f645009c000 00:14:52.518 [2024-07-15 16:13:35.351527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.352517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.353522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.354533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.355529] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.356531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.357539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.358541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.518 [2024-07-15 16:13:35.359544] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.518 [2024-07-15 16:13:35.359564] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f644ee4e000 00:14:52.518 [2024-07-15 16:13:35.360705] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.518 [2024-07-15 16:13:35.375387] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:52.518 [2024-07-15 16:13:35.375420] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:52.518 [2024-07-15 16:13:35.380682] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:52.518 [2024-07-15 16:13:35.380753] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:52.518 [2024-07-15 16:13:35.380848] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:52.518 [2024-07-15 16:13:35.380880] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:52.518 [2024-07-15 16:13:35.380891] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:52.519 [2024-07-15 16:13:35.381680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:52.519 [2024-07-15 16:13:35.381704] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:52.519 [2024-07-15 16:13:35.381731] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:52.519 [2024-07-15 16:13:35.382686] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:52.519 [2024-07-15 16:13:35.382705] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:52.519 [2024-07-15 16:13:35.382718] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:52.519 [2024-07-15 16:13:35.383691] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:52.519 [2024-07-15 16:13:35.383709] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:52.519 [2024-07-15 16:13:35.384695] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:52.519 [2024-07-15 16:13:35.384713] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:52.519 [2024-07-15 16:13:35.384742] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:52.519 [2024-07-15 16:13:35.384756] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:52.519 [2024-07-15 16:13:35.384865] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:52.519 [2024-07-15 16:13:35.384873] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:52.519 [2024-07-15 16:13:35.384882] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:52.519 [2024-07-15 16:13:35.386748] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:52.519 [2024-07-15 16:13:35.387729] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:52.519 [2024-07-15 16:13:35.388735] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:52.519 [2024-07-15 16:13:35.389716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.519 [2024-07-15 16:13:35.389839] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:52.519 [2024-07-15 16:13:35.390758] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:52.519 [2024-07-15 16:13:35.390775] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:52.519 [2024-07-15 16:13:35.390784] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.390809] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:52.519 [2024-07-15 16:13:35.390827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.390859] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.519 [2024-07-15 16:13:35.390868] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.519 [2024-07-15 16:13:35.390892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.390943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.390964] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:52.519 [2024-07-15 16:13:35.390973] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:52.519 [2024-07-15 16:13:35.390981] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:52.519 [2024-07-15 16:13:35.390989] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:52.519 [2024-07-15 16:13:35.390996] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:14:52.519 [2024-07-15 16:13:35.391004] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:52.519 [2024-07-15 16:13:35.391038] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391051] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.391098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.519 [2024-07-15 16:13:35.391110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.519 [2024-07-15 16:13:35.391121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.519 [2024-07-15 16:13:35.391131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.519 [2024-07-15 16:13:35.391139] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391153] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.391189] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:52.519 [2024-07-15 16:13:35.391197] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391207] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391222] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.391314] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391329] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391342] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:52.519 [2024-07-15 16:13:35.391350] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:52.519 [2024-07-15 16:13:35.391359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.391392] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:52.519 [2024-07-15 16:13:35.391408] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391421] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391432] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.519 [2024-07-15 16:13:35.391440] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.519 [2024-07-15 16:13:35.391449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.391488] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391502] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391514] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.519 [2024-07-15 16:13:35.391521] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.519 [2024-07-15 16:13:35.391530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.391557] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391568] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391581] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391592] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391600] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391608] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:52.519 [2024-07-15 16:13:35.391619] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:52.519 [2024-07-15 16:13:35.391628] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:52.519 [2024-07-15 16:13:35.391657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:52.519 [2024-07-15 16:13:35.391691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:52.519 [2024-07-15 16:13:35.391702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:52.520 [2024-07-15 16:13:35.391729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:52.520 [2024-07-15 16:13:35.391761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:52.520 [2024-07-15 16:13:35.391780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.520 [2024-07-15 16:13:35.391792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:52.520 [2024-07-15 16:13:35.391810] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:52.520 [2024-07-15 16:13:35.391818] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:52.520 [2024-07-15 16:13:35.391825] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:52.520 [2024-07-15 16:13:35.391831] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:52.520 [2024-07-15 16:13:35.391840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:52.520 [2024-07-15 16:13:35.391851] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:52.520 [2024-07-15 16:13:35.391859] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:52.520 [2024-07-15 16:13:35.391868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:52.520 [2024-07-15 16:13:35.391878] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:52.520 [2024-07-15 16:13:35.391886] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.520 [2024-07-15 16:13:35.391894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.520 [2024-07-15 16:13:35.391906] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:52.520 [2024-07-15 16:13:35.391913] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:52.520 [2024-07-15 16:13:35.391922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:52.520 [2024-07-15 16:13:35.391932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:52.520 [2024-07-15 16:13:35.391952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:52.520 [2024-07-15 16:13:35.391967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:52.520 [2024-07-15 16:13:35.391985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:52.520 ===================================================== 00:14:52.520 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.520 ===================================================== 00:14:52.520 Controller Capabilities/Features 00:14:52.520 ================================ 00:14:52.520 Vendor ID: 4e58 00:14:52.520 Subsystem Vendor ID: 4e58 00:14:52.520 Serial Number: SPDK1 00:14:52.520 Model Number: SPDK bdev Controller 00:14:52.520 Firmware Version: 24.05.1 00:14:52.520 Recommended Arb Burst: 6 00:14:52.520 IEEE OUI Identifier: 8d 6b 50 00:14:52.520 Multi-path I/O 00:14:52.520 May have multiple subsystem ports: Yes 00:14:52.520 May have multiple controllers: Yes 00:14:52.520 Associated with SR-IOV VF: No 00:14:52.520 Max Data Transfer Size: 131072 00:14:52.520 Max Number of Namespaces: 32 00:14:52.520 Max Number of I/O Queues: 127 00:14:52.520 NVMe Specification Version (VS): 1.3 00:14:52.520 NVMe Specification Version (Identify): 1.3 00:14:52.520 Maximum Queue Entries: 256 00:14:52.520 Contiguous Queues Required: Yes 00:14:52.520 Arbitration Mechanisms Supported 00:14:52.520 Weighted Round Robin: Not Supported 00:14:52.520 Vendor Specific: Not Supported 00:14:52.520 Reset Timeout: 15000 ms 00:14:52.520 Doorbell Stride: 4 bytes 00:14:52.520 NVM Subsystem Reset: Not Supported 00:14:52.520 Command Sets Supported 00:14:52.520 NVM Command Set: Supported 00:14:52.520 Boot Partition: Not Supported 00:14:52.520 Memory Page Size Minimum: 4096 bytes 00:14:52.520 Memory Page Size Maximum: 4096 bytes 00:14:52.520 Persistent Memory Region: Not Supported 00:14:52.520 Optional Asynchronous Events Supported 00:14:52.520 Namespace Attribute Notices: Supported 00:14:52.520 Firmware Activation Notices: Not Supported 00:14:52.520 ANA Change Notices: Not Supported 00:14:52.520 PLE Aggregate Log Change Notices: 
Not Supported 00:14:52.520 LBA Status Info Alert Notices: Not Supported 00:14:52.520 EGE Aggregate Log Change Notices: Not Supported 00:14:52.520 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.520 Zone Descriptor Change Notices: Not Supported 00:14:52.520 Discovery Log Change Notices: Not Supported 00:14:52.520 Controller Attributes 00:14:52.520 128-bit Host Identifier: Supported 00:14:52.520 Non-Operational Permissive Mode: Not Supported 00:14:52.520 NVM Sets: Not Supported 00:14:52.520 Read Recovery Levels: Not Supported 00:14:52.520 Endurance Groups: Not Supported 00:14:52.520 Predictable Latency Mode: Not Supported 00:14:52.520 Traffic Based Keep ALive: Not Supported 00:14:52.520 Namespace Granularity: Not Supported 00:14:52.520 SQ Associations: Not Supported 00:14:52.520 UUID List: Not Supported 00:14:52.520 Multi-Domain Subsystem: Not Supported 00:14:52.520 Fixed Capacity Management: Not Supported 00:14:52.520 Variable Capacity Management: Not Supported 00:14:52.520 Delete Endurance Group: Not Supported 00:14:52.520 Delete NVM Set: Not Supported 00:14:52.520 Extended LBA Formats Supported: Not Supported 00:14:52.520 Flexible Data Placement Supported: Not Supported 00:14:52.520 00:14:52.520 Controller Memory Buffer Support 00:14:52.520 ================================ 00:14:52.520 Supported: No 00:14:52.520 00:14:52.520 Persistent Memory Region Support 00:14:52.520 ================================ 00:14:52.520 Supported: No 00:14:52.520 00:14:52.520 Admin Command Set Attributes 00:14:52.520 ============================ 00:14:52.520 Security Send/Receive: Not Supported 00:14:52.520 Format NVM: Not Supported 00:14:52.520 Firmware Activate/Download: Not Supported 00:14:52.520 Namespace Management: Not Supported 00:14:52.520 Device Self-Test: Not Supported 00:14:52.520 Directives: Not Supported 00:14:52.520 NVMe-MI: Not Supported 00:14:52.520 Virtualization Management: Not Supported 00:14:52.520 Doorbell Buffer Config: Not Supported 00:14:52.520 Get LBA Status Capability: Not Supported 00:14:52.520 Command & Feature Lockdown Capability: Not Supported 00:14:52.520 Abort Command Limit: 4 00:14:52.520 Async Event Request Limit: 4 00:14:52.520 Number of Firmware Slots: N/A 00:14:52.520 Firmware Slot 1 Read-Only: N/A 00:14:52.520 Firmware Activation Without Reset: N/A 00:14:52.520 Multiple Update Detection Support: N/A 00:14:52.520 Firmware Update Granularity: No Information Provided 00:14:52.520 Per-Namespace SMART Log: No 00:14:52.520 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.520 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:52.520 Command Effects Log Page: Supported 00:14:52.520 Get Log Page Extended Data: Supported 00:14:52.520 Telemetry Log Pages: Not Supported 00:14:52.520 Persistent Event Log Pages: Not Supported 00:14:52.520 Supported Log Pages Log Page: May Support 00:14:52.520 Commands Supported & Effects Log Page: Not Supported 00:14:52.520 Feature Identifiers & Effects Log Page:May Support 00:14:52.520 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.520 Data Area 4 for Telemetry Log: Not Supported 00:14:52.520 Error Log Page Entries Supported: 128 00:14:52.520 Keep Alive: Supported 00:14:52.520 Keep Alive Granularity: 10000 ms 00:14:52.520 00:14:52.520 NVM Command Set Attributes 00:14:52.520 ========================== 00:14:52.520 Submission Queue Entry Size 00:14:52.520 Max: 64 00:14:52.520 Min: 64 00:14:52.520 Completion Queue Entry Size 00:14:52.520 Max: 16 00:14:52.520 Min: 16 00:14:52.520 Number of Namespaces: 32 00:14:52.520 Compare 
Command: Supported 00:14:52.520 Write Uncorrectable Command: Not Supported 00:14:52.520 Dataset Management Command: Supported 00:14:52.520 Write Zeroes Command: Supported 00:14:52.520 Set Features Save Field: Not Supported 00:14:52.520 Reservations: Not Supported 00:14:52.520 Timestamp: Not Supported 00:14:52.520 Copy: Supported 00:14:52.520 Volatile Write Cache: Present 00:14:52.520 Atomic Write Unit (Normal): 1 00:14:52.520 Atomic Write Unit (PFail): 1 00:14:52.520 Atomic Compare & Write Unit: 1 00:14:52.520 Fused Compare & Write: Supported 00:14:52.520 Scatter-Gather List 00:14:52.520 SGL Command Set: Supported (Dword aligned) 00:14:52.520 SGL Keyed: Not Supported 00:14:52.520 SGL Bit Bucket Descriptor: Not Supported 00:14:52.520 SGL Metadata Pointer: Not Supported 00:14:52.520 Oversized SGL: Not Supported 00:14:52.520 SGL Metadata Address: Not Supported 00:14:52.520 SGL Offset: Not Supported 00:14:52.520 Transport SGL Data Block: Not Supported 00:14:52.520 Replay Protected Memory Block: Not Supported 00:14:52.520 00:14:52.520 Firmware Slot Information 00:14:52.520 ========================= 00:14:52.520 Active slot: 1 00:14:52.520 Slot 1 Firmware Revision: 24.05.1 00:14:52.520 00:14:52.520 00:14:52.520 Commands Supported and Effects 00:14:52.520 ============================== 00:14:52.520 Admin Commands 00:14:52.520 -------------- 00:14:52.520 Get Log Page (02h): Supported 00:14:52.520 Identify (06h): Supported 00:14:52.520 Abort (08h): Supported 00:14:52.521 Set Features (09h): Supported 00:14:52.521 Get Features (0Ah): Supported 00:14:52.521 Asynchronous Event Request (0Ch): Supported 00:14:52.521 Keep Alive (18h): Supported 00:14:52.521 I/O Commands 00:14:52.521 ------------ 00:14:52.521 Flush (00h): Supported LBA-Change 00:14:52.521 Write (01h): Supported LBA-Change 00:14:52.521 Read (02h): Supported 00:14:52.521 Compare (05h): Supported 00:14:52.521 Write Zeroes (08h): Supported LBA-Change 00:14:52.521 Dataset Management (09h): Supported LBA-Change 00:14:52.521 Copy (19h): Supported LBA-Change 00:14:52.521 Unknown (79h): Supported LBA-Change 00:14:52.521 Unknown (7Ah): Supported 00:14:52.521 00:14:52.521 Error Log 00:14:52.521 ========= 00:14:52.521 00:14:52.521 Arbitration 00:14:52.521 =========== 00:14:52.521 Arbitration Burst: 1 00:14:52.521 00:14:52.521 Power Management 00:14:52.521 ================ 00:14:52.521 Number of Power States: 1 00:14:52.521 Current Power State: Power State #0 00:14:52.521 Power State #0: 00:14:52.521 Max Power: 0.00 W 00:14:52.521 Non-Operational State: Operational 00:14:52.521 Entry Latency: Not Reported 00:14:52.521 Exit Latency: Not Reported 00:14:52.521 Relative Read Throughput: 0 00:14:52.521 Relative Read Latency: 0 00:14:52.521 Relative Write Throughput: 0 00:14:52.521 Relative Write Latency: 0 00:14:52.521 Idle Power: Not Reported 00:14:52.521 Active Power: Not Reported 00:14:52.521 Non-Operational Permissive Mode: Not Supported 00:14:52.521 00:14:52.521 Health Information 00:14:52.521 ================== 00:14:52.521 Critical Warnings: 00:14:52.521 Available Spare Space: OK 00:14:52.521 Temperature: OK 00:14:52.521 Device Reliability: OK 00:14:52.521 Read Only: No 00:14:52.521 Volatile Memory Backup: OK 00:14:52.521
[2024-07-15 16:13:35.392131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:52.521 [2024-07-15 16:13:35.392147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:52.521 [2024-07-15 16:13:35.392193] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:52.521 [2024-07-15 16:13:35.392209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.521 [2024-07-15 16:13:35.392219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.521 [2024-07-15 16:13:35.392228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.521 [2024-07-15 16:13:35.392237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.521 [2024-07-15 16:13:35.395749] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:52.521 [2024-07-15 16:13:35.395772] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:52.521 [2024-07-15 16:13:35.396787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.521 [2024-07-15 16:13:35.396860] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:52.521 [2024-07-15 16:13:35.396875] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:52.521 [2024-07-15 16:13:35.397788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:52.521 [2024-07-15 16:13:35.397811] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:52.521 [2024-07-15 16:13:35.397865] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:52.521 [2024-07-15 16:13:35.399826] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.521
Current Temperature: 0 Kelvin (-273 Celsius) 00:14:52.521 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:52.521 Available Spare: 0% 00:14:52.521 Available Spare Threshold: 0% 00:14:52.521 Life Percentage Used: 0% 00:14:52.521 Data Units Read: 0 00:14:52.521 Data Units Written: 0 00:14:52.521 Host Read Commands: 0 00:14:52.521 Host Write Commands: 0 00:14:52.521 Controller Busy Time: 0 minutes 00:14:52.521 Power Cycles: 0 00:14:52.521 Power On Hours: 0 hours 00:14:52.521 Unsafe Shutdowns: 0 00:14:52.521 Unrecoverable Media Errors: 0 00:14:52.521 Lifetime Error Log Entries: 0 00:14:52.521 Warning Temperature Time: 0 minutes 00:14:52.521 Critical Temperature Time: 0 minutes 00:14:52.521 00:14:52.521 Number of Queues 00:14:52.521 ================ 00:14:52.521 Number of I/O Submission Queues: 127 00:14:52.521 Number of I/O Completion Queues: 127 00:14:52.521 00:14:52.521 Active Namespaces 00:14:52.521 ================= 00:14:52.521 Namespace ID:1 00:14:52.521 Error Recovery Timeout: Unlimited 00:14:52.521 Command Set Identifier: NVM (00h) 00:14:52.521 Deallocate: Supported 00:14:52.521 Deallocated/Unwritten Error: Not Supported 00:14:52.521 Deallocated Read Value: Unknown 00:14:52.521
Deallocate in Write Zeroes: Not Supported 00:14:52.521 Deallocated Guard Field: 0xFFFF 00:14:52.521 Flush: Supported 00:14:52.521 Reservation: Supported 00:14:52.521 Namespace Sharing Capabilities: Multiple Controllers 00:14:52.521 Size (in LBAs): 131072 (0GiB) 00:14:52.521 Capacity (in LBAs): 131072 (0GiB) 00:14:52.521 Utilization (in LBAs): 131072 (0GiB) 00:14:52.521 NGUID: 27DC491A98FA433E8C944EF8F969EA34 00:14:52.521 UUID: 27dc491a-98fa-433e-8c94-4ef8f969ea34 00:14:52.521 Thin Provisioning: Not Supported 00:14:52.521 Per-NS Atomic Units: Yes 00:14:52.521 Atomic Boundary Size (Normal): 0 00:14:52.521 Atomic Boundary Size (PFail): 0 00:14:52.521 Atomic Boundary Offset: 0 00:14:52.521 Maximum Single Source Range Length: 65535 00:14:52.521 Maximum Copy Length: 65535 00:14:52.521 Maximum Source Range Count: 1 00:14:52.521 NGUID/EUI64 Never Reused: No 00:14:52.521 Namespace Write Protected: No 00:14:52.521 Number of LBA Formats: 1 00:14:52.521 Current LBA Format: LBA Format #00 00:14:52.521 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.521 00:14:52.521 16:13:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:52.521 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.779 [2024-07-15 16:13:35.630552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.069 Initializing NVMe Controllers 00:14:58.069 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:58.069 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:58.069 Initialization complete. Launching workers. 00:14:58.069 ======================================================== 00:14:58.069 Latency(us) 00:14:58.069 Device Information : IOPS MiB/s Average min max 00:14:58.069 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36336.10 141.94 3522.04 1155.06 7460.61 00:14:58.069 ======================================================== 00:14:58.069 Total : 36336.10 141.94 3522.04 1155.06 7460.61 00:14:58.069 00:14:58.069 [2024-07-15 16:13:40.653054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:58.069 16:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:58.069 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.069 [2024-07-15 16:13:40.895246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:03.349 Initializing NVMe Controllers 00:15:03.349 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:03.349 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:03.349 Initialization complete. Launching workers. 
00:15:03.349 ======================================================== 00:15:03.349 Latency(us) 00:15:03.349 Device Information : IOPS MiB/s Average min max 00:15:03.349 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.25 6994.29 10997.87 00:15:03.349 ======================================================== 00:15:03.349 Total : 16051.20 62.70 7984.25 6994.29 10997.87 00:15:03.349 00:15:03.349 [2024-07-15 16:13:45.932165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:03.349 16:13:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:03.349 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.349 [2024-07-15 16:13:46.145255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.612 [2024-07-15 16:13:51.219067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.612 Initializing NVMe Controllers 00:15:08.612 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.612 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:08.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:08.612 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:08.612 Initialization complete. Launching workers. 00:15:08.612 Starting thread on core 2 00:15:08.612 Starting thread on core 3 00:15:08.612 Starting thread on core 1 00:15:08.612 16:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:08.612 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.612 [2024-07-15 16:13:51.527189] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.896 [2024-07-15 16:13:54.597113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.896 Initializing NVMe Controllers 00:15:11.896 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.896 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.896 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:11.896 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:11.896 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:11.896 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:11.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:11.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:11.896 Initialization complete. Launching workers. 
00:15:11.896 Starting thread on core 1 with urgent priority queue 00:15:11.896 Starting thread on core 2 with urgent priority queue 00:15:11.896 Starting thread on core 3 with urgent priority queue 00:15:11.896 Starting thread on core 0 with urgent priority queue 00:15:11.896 SPDK bdev Controller (SPDK1 ) core 0: 5023.67 IO/s 19.91 secs/100000 ios 00:15:11.896 SPDK bdev Controller (SPDK1 ) core 1: 5227.67 IO/s 19.13 secs/100000 ios 00:15:11.896 SPDK bdev Controller (SPDK1 ) core 2: 4998.33 IO/s 20.01 secs/100000 ios 00:15:11.896 SPDK bdev Controller (SPDK1 ) core 3: 4737.33 IO/s 21.11 secs/100000 ios 00:15:11.896 ======================================================== 00:15:11.896 00:15:11.897 16:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:11.897 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.155 [2024-07-15 16:13:54.889247] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.155 Initializing NVMe Controllers 00:15:12.155 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.155 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.155 Namespace ID: 1 size: 0GB 00:15:12.155 Initialization complete. 00:15:12.155 INFO: using host memory buffer for IO 00:15:12.155 Hello world! 00:15:12.155 [2024-07-15 16:13:54.923806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.155 16:13:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:12.155 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.413 [2024-07-15 16:13:55.213210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.347 Initializing NVMe Controllers 00:15:13.347 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.347 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.347 Initialization complete. Launching workers. 
00:15:13.347 submit (in ns) avg, min, max = 5679.8, 3478.9, 4016348.9 00:15:13.347 complete (in ns) avg, min, max = 24537.3, 2061.1, 4015221.1 00:15:13.347 00:15:13.347 Submit histogram 00:15:13.347 ================ 00:15:13.347 Range in us Cumulative Count 00:15:13.347 3.461 - 3.484: 0.0073% ( 1) 00:15:13.347 3.484 - 3.508: 0.1963% ( 26) 00:15:13.347 3.508 - 3.532: 0.9963% ( 110) 00:15:13.347 3.532 - 3.556: 2.8507% ( 255) 00:15:13.347 3.556 - 3.579: 7.7013% ( 667) 00:15:13.347 3.579 - 3.603: 15.1407% ( 1023) 00:15:13.347 3.603 - 3.627: 24.1582% ( 1240) 00:15:13.347 3.627 - 3.650: 32.5794% ( 1158) 00:15:13.347 3.650 - 3.674: 40.6225% ( 1106) 00:15:13.347 3.674 - 3.698: 48.0038% ( 1015) 00:15:13.347 3.698 - 3.721: 54.9924% ( 961) 00:15:13.347 3.721 - 3.745: 59.0793% ( 562) 00:15:13.347 3.745 - 3.769: 62.6282% ( 488) 00:15:13.347 3.769 - 3.793: 65.6025% ( 409) 00:15:13.347 3.793 - 3.816: 69.2968% ( 508) 00:15:13.347 3.816 - 3.840: 73.1074% ( 524) 00:15:13.347 3.840 - 3.864: 77.1362% ( 554) 00:15:13.347 3.864 - 3.887: 80.8959% ( 517) 00:15:13.347 3.887 - 3.911: 84.3357% ( 473) 00:15:13.347 3.911 - 3.935: 86.6410% ( 317) 00:15:13.347 3.935 - 3.959: 88.4372% ( 247) 00:15:13.347 3.959 - 3.982: 89.8335% ( 192) 00:15:13.347 3.982 - 4.006: 91.1570% ( 182) 00:15:13.347 4.006 - 4.030: 92.1242% ( 133) 00:15:13.347 4.030 - 4.053: 92.9678% ( 116) 00:15:13.347 4.053 - 4.077: 93.7968% ( 114) 00:15:13.347 4.077 - 4.101: 94.5749% ( 107) 00:15:13.347 4.101 - 4.124: 95.2513% ( 93) 00:15:13.347 4.124 - 4.148: 95.8476% ( 82) 00:15:13.347 4.148 - 4.172: 96.2985% ( 62) 00:15:13.347 4.172 - 4.196: 96.5239% ( 31) 00:15:13.347 4.196 - 4.219: 96.7057% ( 25) 00:15:13.347 4.219 - 4.243: 96.8584% ( 21) 00:15:13.347 4.243 - 4.267: 96.9457% ( 12) 00:15:13.347 4.267 - 4.290: 97.0838% ( 19) 00:15:13.347 4.290 - 4.314: 97.2075% ( 17) 00:15:13.347 4.314 - 4.338: 97.2802% ( 10) 00:15:13.347 4.338 - 4.361: 97.3384% ( 8) 00:15:13.347 4.361 - 4.385: 97.3747% ( 5) 00:15:13.347 4.385 - 4.409: 97.4256% ( 7) 00:15:13.347 4.409 - 4.433: 97.4547% ( 4) 00:15:13.347 4.433 - 4.456: 97.4765% ( 3) 00:15:13.347 4.456 - 4.480: 97.4911% ( 2) 00:15:13.347 4.480 - 4.504: 97.5202% ( 4) 00:15:13.347 4.504 - 4.527: 97.5347% ( 2) 00:15:13.347 4.527 - 4.551: 97.5420% ( 1) 00:15:13.347 4.599 - 4.622: 97.5638% ( 3) 00:15:13.347 4.622 - 4.646: 97.5856% ( 3) 00:15:13.347 4.646 - 4.670: 97.6002% ( 2) 00:15:13.347 4.670 - 4.693: 97.6438% ( 6) 00:15:13.347 4.693 - 4.717: 97.7093% ( 9) 00:15:13.347 4.717 - 4.741: 97.7383% ( 4) 00:15:13.347 4.741 - 4.764: 97.8183% ( 11) 00:15:13.347 4.764 - 4.788: 97.8474% ( 4) 00:15:13.347 4.788 - 4.812: 97.8765% ( 4) 00:15:13.347 4.812 - 4.836: 97.8911% ( 2) 00:15:13.347 4.836 - 4.859: 97.9202% ( 4) 00:15:13.347 4.859 - 4.883: 98.0001% ( 11) 00:15:13.347 4.883 - 4.907: 98.0729% ( 10) 00:15:13.347 4.907 - 4.930: 98.1165% ( 6) 00:15:13.347 4.930 - 4.954: 98.1310% ( 2) 00:15:13.347 4.954 - 4.978: 98.1456% ( 2) 00:15:13.347 4.978 - 5.001: 98.1674% ( 3) 00:15:13.347 5.001 - 5.025: 98.1892% ( 3) 00:15:13.347 5.025 - 5.049: 98.2110% ( 3) 00:15:13.347 5.049 - 5.073: 98.2183% ( 1) 00:15:13.347 5.073 - 5.096: 98.2329% ( 2) 00:15:13.347 5.096 - 5.120: 98.2619% ( 4) 00:15:13.347 5.120 - 5.144: 98.2692% ( 1) 00:15:13.347 5.144 - 5.167: 98.2838% ( 2) 00:15:13.347 5.191 - 5.215: 98.2983% ( 2) 00:15:13.347 5.215 - 5.239: 98.3128% ( 2) 00:15:13.347 5.333 - 5.357: 98.3347% ( 3) 00:15:13.347 5.357 - 5.381: 98.3419% ( 1) 00:15:13.347 5.523 - 5.547: 98.3492% ( 1) 00:15:13.347 5.641 - 5.665: 98.3565% ( 1) 00:15:13.347 5.784 - 5.807: 98.3638% ( 
1) 00:15:13.347 5.807 - 5.831: 98.3710% ( 1) 00:15:13.347 5.831 - 5.855: 98.3783% ( 1) 00:15:13.347 6.068 - 6.116: 98.3856% ( 1) 00:15:13.347 6.116 - 6.163: 98.3928% ( 1) 00:15:13.347 6.163 - 6.210: 98.4074% ( 2) 00:15:13.347 6.353 - 6.400: 98.4147% ( 1) 00:15:13.347 6.447 - 6.495: 98.4219% ( 1) 00:15:13.347 6.637 - 6.684: 98.4292% ( 1) 00:15:13.347 6.732 - 6.779: 98.4365% ( 1) 00:15:13.347 6.874 - 6.921: 98.4437% ( 1) 00:15:13.347 6.921 - 6.969: 98.4510% ( 1) 00:15:13.347 7.064 - 7.111: 98.4583% ( 1) 00:15:13.347 7.159 - 7.206: 98.4656% ( 1) 00:15:13.347 7.206 - 7.253: 98.4728% ( 1) 00:15:13.347 7.348 - 7.396: 98.4801% ( 1) 00:15:13.347 7.396 - 7.443: 98.4874% ( 1) 00:15:13.347 7.443 - 7.490: 98.5019% ( 2) 00:15:13.347 7.490 - 7.538: 98.5237% ( 3) 00:15:13.347 7.585 - 7.633: 98.5310% ( 1) 00:15:13.347 7.727 - 7.775: 98.5456% ( 2) 00:15:13.347 7.822 - 7.870: 98.5601% ( 2) 00:15:13.347 7.870 - 7.917: 98.5674% ( 1) 00:15:13.347 7.964 - 8.012: 98.5746% ( 1) 00:15:13.347 8.059 - 8.107: 98.5819% ( 1) 00:15:13.347 8.249 - 8.296: 98.6037% ( 3) 00:15:13.347 8.296 - 8.344: 98.6183% ( 2) 00:15:13.347 8.391 - 8.439: 98.6256% ( 1) 00:15:13.347 8.486 - 8.533: 98.6401% ( 2) 00:15:13.347 8.628 - 8.676: 98.6474% ( 1) 00:15:13.347 8.723 - 8.770: 98.6546% ( 1) 00:15:13.347 8.770 - 8.818: 98.6692% ( 2) 00:15:13.347 8.913 - 8.960: 98.6765% ( 1) 00:15:13.347 9.055 - 9.102: 98.6910% ( 2) 00:15:13.347 9.102 - 9.150: 98.6983% ( 1) 00:15:13.347 9.150 - 9.197: 98.7055% ( 1) 00:15:13.347 9.197 - 9.244: 98.7201% ( 2) 00:15:13.347 9.244 - 9.292: 98.7274% ( 1) 00:15:13.347 9.339 - 9.387: 98.7346% ( 1) 00:15:13.347 9.481 - 9.529: 98.7419% ( 1) 00:15:13.347 9.576 - 9.624: 98.7565% ( 2) 00:15:13.347 10.003 - 10.050: 98.7637% ( 1) 00:15:13.347 10.050 - 10.098: 98.7710% ( 1) 00:15:13.347 10.430 - 10.477: 98.7783% ( 1) 00:15:13.347 10.809 - 10.856: 98.7855% ( 1) 00:15:13.347 11.188 - 11.236: 98.8001% ( 2) 00:15:13.347 11.330 - 11.378: 98.8074% ( 1) 00:15:13.347 11.757 - 11.804: 98.8146% ( 1) 00:15:13.347 11.899 - 11.947: 98.8219% ( 1) 00:15:13.347 11.994 - 12.041: 98.8292% ( 1) 00:15:13.347 12.231 - 12.326: 98.8364% ( 1) 00:15:13.347 12.326 - 12.421: 98.8437% ( 1) 00:15:13.347 12.610 - 12.705: 98.8510% ( 1) 00:15:13.347 12.705 - 12.800: 98.8583% ( 1) 00:15:13.347 12.800 - 12.895: 98.8655% ( 1) 00:15:13.347 12.990 - 13.084: 98.8728% ( 1) 00:15:13.347 13.084 - 13.179: 98.8801% ( 1) 00:15:13.347 13.179 - 13.274: 98.9019% ( 3) 00:15:13.347 13.274 - 13.369: 98.9164% ( 2) 00:15:13.347 13.464 - 13.559: 98.9237% ( 1) 00:15:13.347 13.559 - 13.653: 98.9310% ( 1) 00:15:13.347 13.653 - 13.748: 98.9383% ( 1) 00:15:13.347 14.222 - 14.317: 98.9528% ( 2) 00:15:13.347 14.317 - 14.412: 98.9601% ( 1) 00:15:13.347 14.412 - 14.507: 98.9673% ( 1) 00:15:13.347 14.507 - 14.601: 98.9746% ( 1) 00:15:13.347 14.601 - 14.696: 98.9819% ( 1) 00:15:13.347 14.696 - 14.791: 98.9892% ( 1) 00:15:13.347 14.791 - 14.886: 98.9964% ( 1) 00:15:13.347 16.782 - 16.877: 99.0037% ( 1) 00:15:13.347 17.067 - 17.161: 99.0183% ( 2) 00:15:13.347 17.256 - 17.351: 99.0401% ( 3) 00:15:13.347 17.351 - 17.446: 99.0619% ( 3) 00:15:13.347 17.446 - 17.541: 99.0910% ( 4) 00:15:13.347 17.541 - 17.636: 99.1492% ( 8) 00:15:13.347 17.636 - 17.730: 99.1928% ( 6) 00:15:13.347 17.730 - 17.825: 99.2655% ( 10) 00:15:13.347 17.825 - 17.920: 99.3237% ( 8) 00:15:13.347 17.920 - 18.015: 99.3673% ( 6) 00:15:13.347 18.015 - 18.110: 99.4328% ( 9) 00:15:13.347 18.110 - 18.204: 99.4691% ( 5) 00:15:13.347 18.204 - 18.299: 99.5055% ( 5) 00:15:13.347 18.299 - 18.394: 99.5855% ( 11) 00:15:13.347 18.394 - 
18.489: 99.6509% ( 9) 00:15:13.347 18.489 - 18.584: 99.6800% ( 4) 00:15:13.347 18.584 - 18.679: 99.7164% ( 5) 00:15:13.347 18.679 - 18.773: 99.7309% ( 2) 00:15:13.347 18.773 - 18.868: 99.7600% ( 4) 00:15:13.347 18.868 - 18.963: 99.7964% ( 5) 00:15:13.347 18.963 - 19.058: 99.8109% ( 2) 00:15:13.347 19.058 - 19.153: 99.8473% ( 5) 00:15:13.347 19.153 - 19.247: 99.8546% ( 1) 00:15:13.347 19.247 - 19.342: 99.8691% ( 2) 00:15:13.347 19.532 - 19.627: 99.8764% ( 1) 00:15:13.347 19.721 - 19.816: 99.8836% ( 1) 00:15:13.347 20.101 - 20.196: 99.8909% ( 1) 00:15:13.347 20.480 - 20.575: 99.8982% ( 1) 00:15:13.347 23.609 - 23.704: 99.9055% ( 1) 00:15:13.347 24.273 - 24.462: 99.9127% ( 1) 00:15:13.348 25.410 - 25.600: 99.9273% ( 2) 00:15:13.348 25.979 - 26.169: 99.9346% ( 1) 00:15:13.348 26.359 - 26.548: 99.9418% ( 1) 00:15:13.348 27.307 - 27.496: 99.9491% ( 1) 00:15:13.348 30.720 - 30.910: 99.9564% ( 1) 00:15:13.348 3980.705 - 4004.978: 99.9927% ( 5) 00:15:13.348 4004.978 - 4029.250: 100.0000% ( 1) 00:15:13.348 00:15:13.348 Complete histogram 00:15:13.348 ================== 00:15:13.348 Range in us Cumulative Count 00:15:13.348 2.050 - 2.062: 0.0145% ( 2) 00:15:13.348 2.062 - 2.074: 22.8711% ( 3143) 00:15:13.348 2.074 - 2.086: 42.7896% ( 2739) 00:15:13.348 2.086 - 2.098: 44.7458% ( 269) 00:15:13.348 2.098 - 2.110: 55.8723% ( 1530) 00:15:13.348 2.110 - 2.121: 60.8901% ( 690) 00:15:13.348 2.121 - 2.133: 62.9918% ( 289) 00:15:13.348 2.133 - 2.145: 73.4710% ( 1441) 00:15:13.348 2.145 - 2.157: 76.8380% ( 463) 00:15:13.348 2.157 - 2.169: 78.1180% ( 176) 00:15:13.348 2.169 - 2.181: 81.6232% ( 482) 00:15:13.348 2.181 - 2.193: 82.9540% ( 183) 00:15:13.348 2.193 - 2.204: 83.7466% ( 109) 00:15:13.348 2.204 - 2.216: 87.7100% ( 545) 00:15:13.348 2.216 - 2.228: 90.1825% ( 340) 00:15:13.348 2.228 - 2.240: 91.8188% ( 225) 00:15:13.348 2.240 - 2.252: 93.6732% ( 255) 00:15:13.348 2.252 - 2.264: 94.2768% ( 83) 00:15:13.348 2.264 - 2.276: 94.4586% ( 25) 00:15:13.348 2.276 - 2.287: 94.8367% ( 52) 00:15:13.348 2.287 - 2.299: 95.2440% ( 56) 00:15:13.348 2.299 - 2.311: 95.6876% ( 61) 00:15:13.348 2.311 - 2.323: 95.9058% ( 30) 00:15:13.348 2.323 - 2.335: 95.9276% ( 3) 00:15:13.348 2.335 - 2.347: 95.9930% ( 9) 00:15:13.348 2.347 - 2.359: 96.1021% ( 15) 00:15:13.348 2.359 - 2.370: 96.2912% ( 26) 00:15:13.348 2.370 - 2.382: 96.5675% ( 38) 00:15:13.348 2.382 - 2.394: 96.9384% ( 51) 00:15:13.348 2.394 - 2.406: 97.2875% ( 48) 00:15:13.348 2.406 - 2.418: 97.4984% ( 29) 00:15:13.348 2.418 - 2.430: 97.6874% ( 26) 00:15:13.348 2.430 - 2.441: 97.8256% ( 19) 00:15:13.348 2.441 - 2.453: 97.9274% ( 14) 00:15:13.348 2.453 - 2.465: 98.0147% ( 12) 00:15:13.348 2.465 - 2.477: 98.1020% ( 12) 00:15:13.348 2.477 - 2.489: 98.2110% ( 15) 00:15:13.348 2.489 - 2.501: 98.2910% ( 11) 00:15:13.348 2.501 - 2.513: 98.3492% ( 8) 00:15:13.348 2.513 - 2.524: 98.3856% ( 5) 00:15:13.348 2.524 - 2.536: 98.4074% ( 3) 00:15:13.348 2.536 - 2.548: 98.4219% ( 2) 00:15:13.348 2.548 - 2.560: 98.4292% ( 1) 00:15:13.348 2.560 - 2.572: 98.4437% ( 2) 00:15:13.348 2.607 - 2.619: 98.4510% ( 1) 00:15:13.348 2.619 - 2.631: 98.4728% ( 3) 00:15:13.348 2.631 - 2.643: 98.4801% ( 1) 00:15:13.348 2.714 - 2.726: 98.4874% ( 1) 00:15:13.348 2.785 - 2.797: 98.4947% ( 1) 00:15:13.348 3.153 - 3.176: 98.5019% ( 1) 00:15:13.348 3.319 - 3.342: 98.5092% ( 1) 00:15:13.348 3.366 - 3.390: 98.5237% ( 2) 00:15:13.348 3.390 - 3.413: 98.5456% ( 3) 00:15:13.348 3.413 - 3.437: 98.5528% ( 1) 00:15:13.348 3.437 - 3.461: 98.5674% ( 2) 00:15:13.348 3.484 - 3.508: 98.5965% ( 4) 00:15:13.348 3.508 - 3.532: 
98.6110% ( 2) 00:15:13.348 3.532 - 3.556: 98.6183% ( 1) 00:15:13.348 3.579 - 3.603: 98.6401% ( 3) 00:15:13.348 3.627 - 3.650: 98.6546% ( 2) 00:15:13.348 3.650 - 3.674: 98.6619% ( 1) 00:15:13.348 3.674 - 3.698: 98.6837% ( 3) 00:15:13.348 3.698 - 3.721: 98.6910% ( 1) 00:15:13.348 3.745 - 3.769: 98.7055% ( 2) 00:15:13.348 3.769 - 3.793: 98.7201% ( 2) 00:15:13.348 3.793 - 3.816: 98.7274% ( 1) 00:15:13.348 3.816 - 3.840: 98.7346% ( 1) 00:15:13.348 3.864 - 3.887: 98.7492% ( 2) 00:15:13.348 3.887 - 3.911: 98.7565% ( 1) 00:15:13.348 3.935 - 3.959: 98.7637% ( 1) 00:15:13.348 [2024-07-15 16:13:56.238460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.348 5.049 - 5.073: 98.7710% ( 1) 00:15:13.348 5.239 - 5.262: 98.7783% ( 1) 00:15:13.348 5.262 - 5.286: 98.7855% ( 1) 00:15:13.348 5.570 - 5.594: 98.7928% ( 1) 00:15:13.348 5.784 - 5.807: 98.8001% ( 1) 00:15:13.348 5.973 - 5.997: 98.8074% ( 1) 00:15:13.348 6.068 - 6.116: 98.8146% ( 1) 00:15:13.348 6.116 - 6.163: 98.8219% ( 1) 00:15:13.348 6.163 - 6.210: 98.8292% ( 1) 00:15:13.348 6.305 - 6.353: 98.8364% ( 1) 00:15:13.348 6.447 - 6.495: 98.8510% ( 2) 00:15:13.348 6.590 - 6.637: 98.8583% ( 1) 00:15:13.348 6.684 - 6.732: 98.8655% ( 1) 00:15:13.348 7.064 - 7.111: 98.8728% ( 1) 00:15:13.348 7.727 - 7.775: 98.8801% ( 1) 00:15:13.348 7.822 - 7.870: 98.8874% ( 1) 00:15:13.348 9.055 - 9.102: 98.8946% ( 1) 00:15:13.348 11.567 - 11.615: 98.9019% ( 1) 00:15:13.348 15.265 - 15.360: 98.9092% ( 1) 00:15:13.348 15.550 - 15.644: 98.9164% ( 1) 00:15:13.348 15.644 - 15.739: 98.9383% ( 3) 00:15:13.348 15.739 - 15.834: 98.9455% ( 1) 00:15:13.348 15.834 - 15.929: 98.9601% ( 2) 00:15:13.348 15.929 - 16.024: 98.9892% ( 4) 00:15:13.348 16.024 - 16.119: 98.9964% ( 1) 00:15:13.348 16.119 - 16.213: 99.0255% ( 4) 00:15:13.348 16.213 - 16.308: 99.0764% ( 7) 00:15:13.348 16.308 - 16.403: 99.1273% ( 7) 00:15:13.348 16.403 - 16.498: 99.1564% ( 4) 00:15:13.348 16.498 - 16.593: 99.2001% ( 6) 00:15:13.348 16.593 - 16.687: 99.2146% ( 2) 00:15:13.348 16.687 - 16.782: 99.2582% ( 6) 00:15:13.348 16.782 - 16.877: 99.3237% ( 9) 00:15:13.348 16.877 - 16.972: 99.3310% ( 1) 00:15:13.348 16.972 - 17.067: 99.3455% ( 2) 00:15:13.348 17.161 - 17.256: 99.3746% ( 4) 00:15:13.348 17.256 - 17.351: 99.3891% ( 2) 00:15:13.348 17.351 - 17.446: 99.3964% ( 1) 00:15:13.348 17.446 - 17.541: 99.4037% ( 1) 00:15:13.348 17.920 - 18.015: 99.4182% ( 2) 00:15:13.348 18.204 - 18.299: 99.4255% ( 1) 00:15:13.348 20.006 - 20.101: 99.4328% ( 1) 00:15:13.348 20.764 - 20.859: 99.4400% ( 1) 00:15:13.348 3046.210 - 3058.347: 99.4473% ( 1) 00:15:13.348 3980.705 - 4004.978: 99.9346% ( 67) 00:15:13.348 4004.978 - 4029.250: 100.0000% ( 9) 00:15:13.348 00:15:13.348 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:13.348 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:13.348 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:13.348 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:13.348 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.606 [ 00:15:13.606 { 00:15:13.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.606 "subtype": "Discovery",
00:15:13.606 "listen_addresses": [], 00:15:13.606 "allow_any_host": true, 00:15:13.606 "hosts": [] 00:15:13.606 }, 00:15:13.606 { 00:15:13.606 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.606 "subtype": "NVMe", 00:15:13.606 "listen_addresses": [ 00:15:13.606 { 00:15:13.606 "trtype": "VFIOUSER", 00:15:13.606 "adrfam": "IPv4", 00:15:13.606 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.606 "trsvcid": "0" 00:15:13.606 } 00:15:13.606 ], 00:15:13.606 "allow_any_host": true, 00:15:13.606 "hosts": [], 00:15:13.606 "serial_number": "SPDK1", 00:15:13.606 "model_number": "SPDK bdev Controller", 00:15:13.606 "max_namespaces": 32, 00:15:13.606 "min_cntlid": 1, 00:15:13.606 "max_cntlid": 65519, 00:15:13.606 "namespaces": [ 00:15:13.606 { 00:15:13.606 "nsid": 1, 00:15:13.606 "bdev_name": "Malloc1", 00:15:13.606 "name": "Malloc1", 00:15:13.606 "nguid": "27DC491A98FA433E8C944EF8F969EA34", 00:15:13.606 "uuid": "27dc491a-98fa-433e-8c94-4ef8f969ea34" 00:15:13.606 } 00:15:13.606 ] 00:15:13.606 }, 00:15:13.606 { 00:15:13.606 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.606 "subtype": "NVMe", 00:15:13.606 "listen_addresses": [ 00:15:13.606 { 00:15:13.606 "trtype": "VFIOUSER", 00:15:13.606 "adrfam": "IPv4", 00:15:13.606 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.606 "trsvcid": "0" 00:15:13.606 } 00:15:13.606 ], 00:15:13.606 "allow_any_host": true, 00:15:13.606 "hosts": [], 00:15:13.606 "serial_number": "SPDK2", 00:15:13.606 "model_number": "SPDK bdev Controller", 00:15:13.606 "max_namespaces": 32, 00:15:13.606 "min_cntlid": 1, 00:15:13.606 "max_cntlid": 65519, 00:15:13.606 "namespaces": [ 00:15:13.606 { 00:15:13.606 "nsid": 1, 00:15:13.606 "bdev_name": "Malloc2", 00:15:13.606 "name": "Malloc2", 00:15:13.606 "nguid": "BB317FF40F09489388375E1D31B48FF0", 00:15:13.606 "uuid": "bb317ff4-0f09-4893-8837-5e1d31b48ff0" 00:15:13.606 } 00:15:13.606 ] 00:15:13.606 } 00:15:13.606 ] 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=284371 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:13.606 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:13.607 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:13.607 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.882 [2024-07-15 16:13:56.679210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.882 Malloc3 00:15:13.882 16:13:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:14.139 [2024-07-15 16:13:57.040916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:14.139 16:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:14.139 Asynchronous Event Request test 00:15:14.139 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.139 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.139 Registering asynchronous event callbacks... 00:15:14.139 Starting namespace attribute notice tests for all controllers... 00:15:14.139 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:14.139 aer_cb - Changed Namespace 00:15:14.139 Cleaning up... 00:15:14.396 [ 00:15:14.396 { 00:15:14.396 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:14.396 "subtype": "Discovery", 00:15:14.396 "listen_addresses": [], 00:15:14.396 "allow_any_host": true, 00:15:14.396 "hosts": [] 00:15:14.396 }, 00:15:14.396 { 00:15:14.396 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:14.396 "subtype": "NVMe", 00:15:14.396 "listen_addresses": [ 00:15:14.396 { 00:15:14.396 "trtype": "VFIOUSER", 00:15:14.396 "adrfam": "IPv4", 00:15:14.396 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:14.396 "trsvcid": "0" 00:15:14.396 } 00:15:14.396 ], 00:15:14.396 "allow_any_host": true, 00:15:14.396 "hosts": [], 00:15:14.396 "serial_number": "SPDK1", 00:15:14.396 "model_number": "SPDK bdev Controller", 00:15:14.396 "max_namespaces": 32, 00:15:14.396 "min_cntlid": 1, 00:15:14.396 "max_cntlid": 65519, 00:15:14.396 "namespaces": [ 00:15:14.396 { 00:15:14.396 "nsid": 1, 00:15:14.396 "bdev_name": "Malloc1", 00:15:14.396 "name": "Malloc1", 00:15:14.396 "nguid": "27DC491A98FA433E8C944EF8F969EA34", 00:15:14.396 "uuid": "27dc491a-98fa-433e-8c94-4ef8f969ea34" 00:15:14.396 }, 00:15:14.396 { 00:15:14.396 "nsid": 2, 00:15:14.396 "bdev_name": "Malloc3", 00:15:14.396 "name": "Malloc3", 00:15:14.396 "nguid": "EAB9B533A3ED42519D057AFEAF0690C3", 00:15:14.396 "uuid": "eab9b533-a3ed-4251-9d05-7afeaf0690c3" 00:15:14.396 } 00:15:14.396 ] 00:15:14.396 }, 00:15:14.396 { 00:15:14.396 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:14.396 "subtype": "NVMe", 00:15:14.396 "listen_addresses": [ 00:15:14.396 { 00:15:14.396 "trtype": "VFIOUSER", 00:15:14.396 "adrfam": "IPv4", 00:15:14.396 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:14.396 "trsvcid": "0" 00:15:14.396 } 00:15:14.396 ], 00:15:14.396 "allow_any_host": true, 00:15:14.396 "hosts": [], 00:15:14.396 "serial_number": "SPDK2", 00:15:14.396 "model_number": "SPDK bdev Controller", 00:15:14.396 
"max_namespaces": 32, 00:15:14.396 "min_cntlid": 1, 00:15:14.396 "max_cntlid": 65519, 00:15:14.396 "namespaces": [ 00:15:14.396 { 00:15:14.396 "nsid": 1, 00:15:14.396 "bdev_name": "Malloc2", 00:15:14.396 "name": "Malloc2", 00:15:14.396 "nguid": "BB317FF40F09489388375E1D31B48FF0", 00:15:14.396 "uuid": "bb317ff4-0f09-4893-8837-5e1d31b48ff0" 00:15:14.396 } 00:15:14.396 ] 00:15:14.396 } 00:15:14.396 ] 00:15:14.396 16:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 284371 00:15:14.396 16:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.396 16:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:14.396 16:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:14.396 16:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:14.396 [2024-07-15 16:13:57.325376] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:14.396 [2024-07-15 16:13:57.325426] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284505 ] 00:15:14.396 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.396 [2024-07-15 16:13:57.361916] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:14.396 [2024-07-15 16:13:57.370030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.396 [2024-07-15 16:13:57.370082] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f81e5f59000 00:15:14.396 [2024-07-15 16:13:57.371030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.396 [2024-07-15 16:13:57.372044] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.396 [2024-07-15 16:13:57.373040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.396 [2024-07-15 16:13:57.374050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.396 [2024-07-15 16:13:57.375053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.396 [2024-07-15 16:13:57.376058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.656 [2024-07-15 16:13:57.377081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:14.656 [2024-07-15 16:13:57.378093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:14.656 [2024-07-15 16:13:57.379105] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:14.656 [2024-07-15 16:13:57.379129] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f81e4d0b000 00:15:14.657 [2024-07-15 16:13:57.380294] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.657 [2024-07-15 16:13:57.397087] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:14.657 [2024-07-15 16:13:57.397137] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:14.657 [2024-07-15 16:13:57.399223] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:14.657 [2024-07-15 16:13:57.399274] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:14.657 [2024-07-15 16:13:57.399359] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:14.657 [2024-07-15 16:13:57.399386] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:14.657 [2024-07-15 16:13:57.399396] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:14.657 [2024-07-15 16:13:57.400226] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:14.657 [2024-07-15 16:13:57.400251] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:14.657 [2024-07-15 16:13:57.400264] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:14.657 [2024-07-15 16:13:57.401233] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:14.657 [2024-07-15 16:13:57.401253] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:14.657 [2024-07-15 16:13:57.401266] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:14.657 [2024-07-15 16:13:57.402236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:14.657 [2024-07-15 16:13:57.402255] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:14.657 [2024-07-15 16:13:57.403238] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:14.657 [2024-07-15 16:13:57.403257] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:14.657 [2024-07-15 16:13:57.403266] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:14.657 [2024-07-15 16:13:57.403277] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:14.657 [2024-07-15 16:13:57.403386] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:14.657 [2024-07-15 16:13:57.403393] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:14.657 [2024-07-15 16:13:57.403401] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:14.657 [2024-07-15 16:13:57.404245] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:14.657 [2024-07-15 16:13:57.405256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:14.657 [2024-07-15 16:13:57.406264] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:14.657 [2024-07-15 16:13:57.407261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.657 [2024-07-15 16:13:57.407334] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:14.657 [2024-07-15 16:13:57.408282] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:14.657 [2024-07-15 16:13:57.408301] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:14.657 [2024-07-15 16:13:57.408315] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.408339] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:14.657 [2024-07-15 16:13:57.408356] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.408379] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.657 [2024-07-15 16:13:57.408388] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.657 [2024-07-15 16:13:57.408406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.657 [2024-07-15 16:13:57.414752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:14.657 [2024-07-15 16:13:57.414777] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:14.657 [2024-07-15 16:13:57.414788] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:14.657 [2024-07-15 16:13:57.414796] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:14.657 [2024-07-15 16:13:57.414804] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:14.657 [2024-07-15 16:13:57.414811] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:14.657 [2024-07-15 16:13:57.414819] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:14.657 [2024-07-15 16:13:57.414827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.414839] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.414855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:14.657 [2024-07-15 16:13:57.422749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:14.657 [2024-07-15 16:13:57.422772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.657 [2024-07-15 16:13:57.422785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.657 [2024-07-15 16:13:57.422797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.657 [2024-07-15 16:13:57.422808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.657 [2024-07-15 16:13:57.422816] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.422832] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.422847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:14.657 [2024-07-15 16:13:57.430766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:14.657 [2024-07-15 16:13:57.430784] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:14.657 [2024-07-15 16:13:57.430797] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.430809] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.430823] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.430837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.657 [2024-07-15 16:13:57.438751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:14.657 [2024-07-15 16:13:57.438834] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.438851] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.438863] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:14.657 [2024-07-15 16:13:57.438871] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:14.657 [2024-07-15 16:13:57.438881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:14.657 [2024-07-15 16:13:57.446765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:14.657 [2024-07-15 16:13:57.446788] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:14.657 [2024-07-15 16:13:57.446804] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.446818] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.446830] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.657 [2024-07-15 16:13:57.446838] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.657 [2024-07-15 16:13:57.446848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.657 [2024-07-15 16:13:57.454750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:14.657 [2024-07-15 16:13:57.454778] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.454794] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.454806] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:14.657 [2024-07-15 16:13:57.454814] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.657 [2024-07-15 16:13:57.454824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.657 [2024-07-15 16:13:57.462761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:14.657 [2024-07-15 16:13:57.462782] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.462799] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:14.657 [2024-07-15 16:13:57.462814] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:14.658 [2024-07-15 16:13:57.462824] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:14.658 [2024-07-15 16:13:57.462833] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:14.658 [2024-07-15 16:13:57.462842] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:14.658 [2024-07-15 16:13:57.462849] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:14.658 [2024-07-15 16:13:57.462857] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:14.658 [2024-07-15 16:13:57.462887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:14.658 [2024-07-15 16:13:57.470751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:14.658 [2024-07-15 16:13:57.470776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:14.658 [2024-07-15 16:13:57.478747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:14.658 [2024-07-15 16:13:57.478771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:14.658 [2024-07-15 16:13:57.486746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:14.658 [2024-07-15 16:13:57.486772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:14.658 [2024-07-15 16:13:57.494762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:14.658 [2024-07-15 16:13:57.494789] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:14.658 [2024-07-15 16:13:57.494798] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:14.658 [2024-07-15 16:13:57.494804] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:14.658 [2024-07-15 16:13:57.494810] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:14.658 [2024-07-15 16:13:57.494820] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:14.658 [2024-07-15 16:13:57.494831] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:14.658 [2024-07-15 16:13:57.494838] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:14.658 [2024-07-15 16:13:57.494847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:14.658 [2024-07-15 16:13:57.494858] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:14.658 [2024-07-15 16:13:57.494865] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:14.658 [2024-07-15 16:13:57.494874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:14.658 [2024-07-15 16:13:57.494885] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:14.658 [2024-07-15 16:13:57.494896] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:14.658 [2024-07-15 16:13:57.494906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:14.658 [2024-07-15 16:13:57.502746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:14.658 [2024-07-15 16:13:57.502772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:14.658 [2024-07-15 16:13:57.502788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:14.658 [2024-07-15 16:13:57.502802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:14.658 ===================================================== 00:15:14.658 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.658 ===================================================== 00:15:14.658 Controller Capabilities/Features 00:15:14.658 ================================ 00:15:14.658 Vendor ID: 4e58 00:15:14.658 Subsystem Vendor ID: 4e58 00:15:14.658 Serial Number: SPDK2 00:15:14.658 Model Number: SPDK bdev Controller 00:15:14.658 Firmware Version: 24.05.1 00:15:14.658 Recommended Arb Burst: 6 00:15:14.658 IEEE OUI Identifier: 8d 6b 50 00:15:14.658 Multi-path I/O 00:15:14.658 May have multiple subsystem ports: Yes 00:15:14.658 May have multiple controllers: Yes 00:15:14.658 Associated with SR-IOV VF: No 00:15:14.658 Max Data Transfer Size: 131072 00:15:14.658 Max Number of Namespaces: 32 00:15:14.658 Max Number of I/O Queues: 127 00:15:14.658 NVMe Specification Version (VS): 1.3 00:15:14.658 NVMe Specification Version (Identify): 1.3 00:15:14.658 Maximum Queue Entries: 256 00:15:14.658 Contiguous Queues Required: Yes 00:15:14.658 Arbitration Mechanisms Supported 00:15:14.658 Weighted Round Robin: Not Supported 00:15:14.658 Vendor Specific: Not Supported 00:15:14.658 Reset Timeout: 15000 ms 00:15:14.658 Doorbell Stride: 4 bytes 
00:15:14.658 NVM Subsystem Reset: Not Supported 00:15:14.658 Command Sets Supported 00:15:14.658 NVM Command Set: Supported 00:15:14.658 Boot Partition: Not Supported 00:15:14.658 Memory Page Size Minimum: 4096 bytes 00:15:14.658 Memory Page Size Maximum: 4096 bytes 00:15:14.658 Persistent Memory Region: Not Supported 00:15:14.658 Optional Asynchronous Events Supported 00:15:14.658 Namespace Attribute Notices: Supported 00:15:14.658 Firmware Activation Notices: Not Supported 00:15:14.658 ANA Change Notices: Not Supported 00:15:14.658 PLE Aggregate Log Change Notices: Not Supported 00:15:14.658 LBA Status Info Alert Notices: Not Supported 00:15:14.658 EGE Aggregate Log Change Notices: Not Supported 00:15:14.658 Normal NVM Subsystem Shutdown event: Not Supported 00:15:14.658 Zone Descriptor Change Notices: Not Supported 00:15:14.658 Discovery Log Change Notices: Not Supported 00:15:14.658 Controller Attributes 00:15:14.658 128-bit Host Identifier: Supported 00:15:14.658 Non-Operational Permissive Mode: Not Supported 00:15:14.658 NVM Sets: Not Supported 00:15:14.658 Read Recovery Levels: Not Supported 00:15:14.658 Endurance Groups: Not Supported 00:15:14.658 Predictable Latency Mode: Not Supported 00:15:14.658 Traffic Based Keep ALive: Not Supported 00:15:14.658 Namespace Granularity: Not Supported 00:15:14.658 SQ Associations: Not Supported 00:15:14.658 UUID List: Not Supported 00:15:14.658 Multi-Domain Subsystem: Not Supported 00:15:14.658 Fixed Capacity Management: Not Supported 00:15:14.658 Variable Capacity Management: Not Supported 00:15:14.658 Delete Endurance Group: Not Supported 00:15:14.658 Delete NVM Set: Not Supported 00:15:14.658 Extended LBA Formats Supported: Not Supported 00:15:14.658 Flexible Data Placement Supported: Not Supported 00:15:14.658 00:15:14.658 Controller Memory Buffer Support 00:15:14.658 ================================ 00:15:14.658 Supported: No 00:15:14.658 00:15:14.658 Persistent Memory Region Support 00:15:14.658 ================================ 00:15:14.658 Supported: No 00:15:14.658 00:15:14.658 Admin Command Set Attributes 00:15:14.658 ============================ 00:15:14.658 Security Send/Receive: Not Supported 00:15:14.658 Format NVM: Not Supported 00:15:14.658 Firmware Activate/Download: Not Supported 00:15:14.658 Namespace Management: Not Supported 00:15:14.658 Device Self-Test: Not Supported 00:15:14.658 Directives: Not Supported 00:15:14.658 NVMe-MI: Not Supported 00:15:14.658 Virtualization Management: Not Supported 00:15:14.658 Doorbell Buffer Config: Not Supported 00:15:14.658 Get LBA Status Capability: Not Supported 00:15:14.658 Command & Feature Lockdown Capability: Not Supported 00:15:14.658 Abort Command Limit: 4 00:15:14.658 Async Event Request Limit: 4 00:15:14.658 Number of Firmware Slots: N/A 00:15:14.658 Firmware Slot 1 Read-Only: N/A 00:15:14.658 Firmware Activation Without Reset: N/A 00:15:14.658 Multiple Update Detection Support: N/A 00:15:14.658 Firmware Update Granularity: No Information Provided 00:15:14.658 Per-Namespace SMART Log: No 00:15:14.658 Asymmetric Namespace Access Log Page: Not Supported 00:15:14.658 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:14.658 Command Effects Log Page: Supported 00:15:14.658 Get Log Page Extended Data: Supported 00:15:14.658 Telemetry Log Pages: Not Supported 00:15:14.658 Persistent Event Log Pages: Not Supported 00:15:14.658 Supported Log Pages Log Page: May Support 00:15:14.658 Commands Supported & Effects Log Page: Not Supported 00:15:14.658 Feature Identifiers & Effects Log Page:May 
Support 00:15:14.658 NVMe-MI Commands & Effects Log Page: May Support 00:15:14.658 Data Area 4 for Telemetry Log: Not Supported 00:15:14.658 Error Log Page Entries Supported: 128 00:15:14.658 Keep Alive: Supported 00:15:14.658 Keep Alive Granularity: 10000 ms 00:15:14.658 00:15:14.658 NVM Command Set Attributes 00:15:14.658 ========================== 00:15:14.658 Submission Queue Entry Size 00:15:14.658 Max: 64 00:15:14.658 Min: 64 00:15:14.658 Completion Queue Entry Size 00:15:14.658 Max: 16 00:15:14.658 Min: 16 00:15:14.658 Number of Namespaces: 32 00:15:14.658 Compare Command: Supported 00:15:14.658 Write Uncorrectable Command: Not Supported 00:15:14.658 Dataset Management Command: Supported 00:15:14.658 Write Zeroes Command: Supported 00:15:14.658 Set Features Save Field: Not Supported 00:15:14.658 Reservations: Not Supported 00:15:14.658 Timestamp: Not Supported 00:15:14.658 Copy: Supported 00:15:14.658 Volatile Write Cache: Present 00:15:14.658 Atomic Write Unit (Normal): 1 00:15:14.658 Atomic Write Unit (PFail): 1 00:15:14.658 Atomic Compare & Write Unit: 1 00:15:14.658 Fused Compare & Write: Supported 00:15:14.658 Scatter-Gather List 00:15:14.658 SGL Command Set: Supported (Dword aligned) 00:15:14.659 SGL Keyed: Not Supported 00:15:14.659 SGL Bit Bucket Descriptor: Not Supported 00:15:14.659 SGL Metadata Pointer: Not Supported 00:15:14.659 Oversized SGL: Not Supported 00:15:14.659 SGL Metadata Address: Not Supported 00:15:14.659 SGL Offset: Not Supported 00:15:14.659 Transport SGL Data Block: Not Supported 00:15:14.659 Replay Protected Memory Block: Not Supported 00:15:14.659 00:15:14.659 Firmware Slot Information 00:15:14.659 ========================= 00:15:14.659 Active slot: 1 00:15:14.659 Slot 1 Firmware Revision: 24.05.1 00:15:14.659 00:15:14.659 00:15:14.659 Commands Supported and Effects 00:15:14.659 ============================== 00:15:14.659 Admin Commands 00:15:14.659 -------------- 00:15:14.659 Get Log Page (02h): Supported 00:15:14.659 Identify (06h): Supported 00:15:14.659 Abort (08h): Supported 00:15:14.659 Set Features (09h): Supported 00:15:14.659 Get Features (0Ah): Supported 00:15:14.659 Asynchronous Event Request (0Ch): Supported 00:15:14.659 Keep Alive (18h): Supported 00:15:14.659 I/O Commands 00:15:14.659 ------------ 00:15:14.659 Flush (00h): Supported LBA-Change 00:15:14.659 Write (01h): Supported LBA-Change 00:15:14.659 Read (02h): Supported 00:15:14.659 Compare (05h): Supported 00:15:14.659 Write Zeroes (08h): Supported LBA-Change 00:15:14.659 Dataset Management (09h): Supported LBA-Change 00:15:14.659 Copy (19h): Supported LBA-Change 00:15:14.659 Unknown (79h): Supported LBA-Change 00:15:14.659 Unknown (7Ah): Supported 00:15:14.659 00:15:14.659 Error Log 00:15:14.659 ========= 00:15:14.659 00:15:14.659 Arbitration 00:15:14.659 =========== 00:15:14.659 Arbitration Burst: 1 00:15:14.659 00:15:14.659 Power Management 00:15:14.659 ================ 00:15:14.659 Number of Power States: 1 00:15:14.659 Current Power State: Power State #0 00:15:14.659 Power State #0: 00:15:14.659 Max Power: 0.00 W 00:15:14.659 Non-Operational State: Operational 00:15:14.659 Entry Latency: Not Reported 00:15:14.659 Exit Latency: Not Reported 00:15:14.659 Relative Read Throughput: 0 00:15:14.659 Relative Read Latency: 0 00:15:14.659 Relative Write Throughput: 0 00:15:14.659 Relative Write Latency: 0 00:15:14.659 Idle Power: Not Reported 00:15:14.659 Active Power: Not Reported 00:15:14.659 Non-Operational Permissive Mode: Not Supported 00:15:14.659 00:15:14.659 Health Information 
00:15:14.659 ================== 00:15:14.659 Critical Warnings: 00:15:14.659 Available Spare Space: OK 00:15:14.659 Temperature: OK 00:15:14.659 Device Reliability: OK 00:15:14.659 Read Only: No 00:15:14.659 Volatile Memory Backup: OK 00:15:14.659 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-15 16:13:57.502919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:14.659 [2024-07-15 16:13:57.510750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:14.659 [2024-07-15 16:13:57.510792] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:14.659 [2024-07-15 16:13:57.510810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.659 [2024-07-15 16:13:57.510820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.659 [2024-07-15 16:13:57.510830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.659 [2024-07-15 16:13:57.510840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.659 [2024-07-15 16:13:57.514748] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:14.659 [2024-07-15 16:13:57.514769] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:14.659 [2024-07-15 16:13:57.514920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.659 [2024-07-15 16:13:57.514998] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:14.659 [2024-07-15 16:13:57.515037] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:14.659 [2024-07-15 16:13:57.515933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:14.659 [2024-07-15 16:13:57.515957] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:14.659 [2024-07-15 16:13:57.516008] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:14.659 [2024-07-15 16:13:57.517230] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:14.659 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:14.659 Available Spare: 0% 00:15:14.659 Available Spare Threshold: 0% 00:15:14.659 Life Percentage Used: 0% 00:15:14.659 Data Units Read: 0 00:15:14.659 Data Units Written: 0 00:15:14.659 Host Read Commands: 0 00:15:14.659 Host Write Commands: 0 00:15:14.659 Controller Busy Time: 0 minutes 00:15:14.659 Power Cycles: 0 00:15:14.659 Power On Hours: 0 hours 00:15:14.659 Unsafe Shutdowns: 0 00:15:14.659 Unrecoverable Media Errors: 0 00:15:14.659 Lifetime Error Log Entries: 0 00:15:14.659 Warning Temperature Time: 0
minutes 00:15:14.659 Critical Temperature Time: 0 minutes 00:15:14.659 00:15:14.659 Number of Queues 00:15:14.659 ================ 00:15:14.659 Number of I/O Submission Queues: 127 00:15:14.659 Number of I/O Completion Queues: 127 00:15:14.659 00:15:14.659 Active Namespaces 00:15:14.659 ================= 00:15:14.659 Namespace ID:1 00:15:14.659 Error Recovery Timeout: Unlimited 00:15:14.659 Command Set Identifier: NVM (00h) 00:15:14.659 Deallocate: Supported 00:15:14.659 Deallocated/Unwritten Error: Not Supported 00:15:14.659 Deallocated Read Value: Unknown 00:15:14.659 Deallocate in Write Zeroes: Not Supported 00:15:14.659 Deallocated Guard Field: 0xFFFF 00:15:14.659 Flush: Supported 00:15:14.659 Reservation: Supported 00:15:14.659 Namespace Sharing Capabilities: Multiple Controllers 00:15:14.659 Size (in LBAs): 131072 (0GiB) 00:15:14.659 Capacity (in LBAs): 131072 (0GiB) 00:15:14.659 Utilization (in LBAs): 131072 (0GiB) 00:15:14.659 NGUID: BB317FF40F09489388375E1D31B48FF0 00:15:14.659 UUID: bb317ff4-0f09-4893-8837-5e1d31b48ff0 00:15:14.659 Thin Provisioning: Not Supported 00:15:14.659 Per-NS Atomic Units: Yes 00:15:14.659 Atomic Boundary Size (Normal): 0 00:15:14.659 Atomic Boundary Size (PFail): 0 00:15:14.659 Atomic Boundary Offset: 0 00:15:14.659 Maximum Single Source Range Length: 65535 00:15:14.659 Maximum Copy Length: 65535 00:15:14.659 Maximum Source Range Count: 1 00:15:14.659 NGUID/EUI64 Never Reused: No 00:15:14.659 Namespace Write Protected: No 00:15:14.659 Number of LBA Formats: 1 00:15:14.659 Current LBA Format: LBA Format #00 00:15:14.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:14.659 00:15:14.659 16:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:14.659 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.917 [2024-07-15 16:13:57.741774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.175 Initializing NVMe Controllers 00:15:20.175 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:20.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:20.175 Initialization complete. Launching workers. 
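Note for anyone replaying this stage outside the CI run: the spdk_nvme_perf step launched just above can be reproduced with the sketch below. Everything in it is taken verbatim from this log except the SPDK shell variable, which is introduced here only so the long workspace path reads cleanly; adjust it to your own checkout.
# Minimal reproduction sketch of the read-perf step (assumes a vfio-user
# target is already serving nqn.2019-07.io.spdk:cnode2 at the socket below).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK"/build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2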
00:15:20.175 ======================================================== 00:15:20.175 Latency(us) 00:15:20.175 Device Information : IOPS MiB/s Average min max 00:15:20.175 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36886.27 144.09 3469.48 1124.48 7538.35 00:15:20.175 ======================================================== 00:15:20.175 Total : 36886.27 144.09 3469.48 1124.48 7538.35 00:15:20.175 00:15:20.175 [2024-07-15 16:14:02.848102] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.175 16:14:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:20.175 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.175 [2024-07-15 16:14:03.092784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.437 Initializing NVMe Controllers 00:15:25.437 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.437 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:25.437 Initialization complete. Launching workers. 00:15:25.437 ======================================================== 00:15:25.437 Latency(us) 00:15:25.437 Device Information : IOPS MiB/s Average min max 00:15:25.437 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34403.55 134.39 3719.72 1171.12 7726.88 00:15:25.437 ======================================================== 00:15:25.437 Total : 34403.55 134.39 3719.72 1171.12 7726.88 00:15:25.437 00:15:25.437 [2024-07-15 16:14:08.118262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.437 16:14:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:25.437 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.437 [2024-07-15 16:14:08.327112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.697 [2024-07-15 16:14:13.466117] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.697 Initializing NVMe Controllers 00:15:30.697 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.697 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.697 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:30.697 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:30.697 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:30.697 Initialization complete. Launching workers. 
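As a quick cross-check on the two perf tables above: the MiB/s column is just IOPS times the 4 KiB I/O size, so 36886.27 x 4096 / 2^20 = 144.09 MiB/s for the read run and 34403.55 x 4096 / 2^20 = 134.39 MiB/s for the write run; the Average, min and max columns are per-I/O latencies in microseconds.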
00:15:30.697 Starting thread on core 2 00:15:30.697 Starting thread on core 3 00:15:30.697 Starting thread on core 1 00:15:30.697 16:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:30.697 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.954 [2024-07-15 16:14:13.780295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.238 [2024-07-15 16:14:16.843493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.238 Initializing NVMe Controllers 00:15:34.238 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.238 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:34.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:34.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:34.238 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:34.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:34.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:34.238 Initialization complete. Launching workers. 00:15:34.238 Starting thread on core 1 with urgent priority queue 00:15:34.238 Starting thread on core 2 with urgent priority queue 00:15:34.238 Starting thread on core 3 with urgent priority queue 00:15:34.238 Starting thread on core 0 with urgent priority queue 00:15:34.238 SPDK bdev Controller (SPDK2 ) core 0: 4435.33 IO/s 22.55 secs/100000 ios 00:15:34.238 SPDK bdev Controller (SPDK2 ) core 1: 4910.00 IO/s 20.37 secs/100000 ios 00:15:34.238 SPDK bdev Controller (SPDK2 ) core 2: 3796.33 IO/s 26.34 secs/100000 ios 00:15:34.238 SPDK bdev Controller (SPDK2 ) core 3: 4506.67 IO/s 22.19 secs/100000 ios 00:15:34.238 ======================================================== 00:15:34.238 00:15:34.238 16:14:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:34.238 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.238 [2024-07-15 16:14:17.137412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.238 Initializing NVMe Controllers 00:15:34.238 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.238 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.238 Namespace ID: 1 size: 0GB 00:15:34.238 Initialization complete. 00:15:34.238 INFO: using host memory buffer for IO 00:15:34.238 Hello world! 
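In the arbitration summary earlier in this stage, the secs/100000 ios column is simply the inverse of the IO/s column (100000 / 4435.33 = 22.55 s for core 0, 100000 / 4910.00 = 20.37 s for core 1, and so on), so a lower figure just means that core drained its 100000 I/Os sooner.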
00:15:34.238 [2024-07-15 16:14:17.146475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.238 16:14:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:34.496 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.496 [2024-07-15 16:14:17.419979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.901 Initializing NVMe Controllers 00:15:35.901 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.901 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.901 Initialization complete. Launching workers. 00:15:35.901 submit (in ns) avg, min, max = 8398.6, 3503.3, 4024970.0 00:15:35.901 complete (in ns) avg, min, max = 25606.4, 2047.8, 4015468.9 00:15:35.901 00:15:35.901 Submit histogram 00:15:35.901 ================ 00:15:35.901 Range in us Cumulative Count 00:15:35.901 3.484 - 3.508: 0.0149% ( 2) 00:15:35.901 3.508 - 3.532: 0.5068% ( 66) 00:15:35.901 3.532 - 3.556: 1.9826% ( 198) 00:15:35.901 3.556 - 3.579: 5.6048% ( 486) 00:15:35.901 3.579 - 3.603: 11.3066% ( 765) 00:15:35.901 3.603 - 3.627: 19.6840% ( 1124) 00:15:35.901 3.627 - 3.650: 29.3732% ( 1300) 00:15:35.901 3.650 - 3.674: 38.4736% ( 1221) 00:15:35.901 3.674 - 3.698: 46.3293% ( 1054) 00:15:35.901 3.698 - 3.721: 53.6782% ( 986) 00:15:35.901 3.721 - 3.745: 58.7687% ( 683) 00:15:35.901 3.745 - 3.769: 62.9276% ( 558) 00:15:35.901 3.769 - 3.793: 66.2890% ( 451) 00:15:35.901 3.793 - 3.816: 70.1573% ( 519) 00:15:35.901 3.816 - 3.840: 73.7199% ( 478) 00:15:35.901 3.840 - 3.864: 77.3794% ( 491) 00:15:35.901 3.864 - 3.887: 80.6887% ( 444) 00:15:35.901 3.887 - 3.911: 83.8861% ( 429) 00:15:35.901 3.911 - 3.935: 86.3904% ( 336) 00:15:35.901 3.935 - 3.959: 88.3581% ( 264) 00:15:35.901 3.959 - 3.982: 89.8860% ( 205) 00:15:35.901 3.982 - 4.006: 91.4362% ( 208) 00:15:35.901 4.006 - 4.030: 92.5617% ( 151) 00:15:35.901 4.030 - 4.053: 93.5455% ( 132) 00:15:35.901 4.053 - 4.077: 94.2312% ( 92) 00:15:35.901 4.077 - 4.101: 94.9914% ( 102) 00:15:35.901 4.101 - 4.124: 95.5877% ( 80) 00:15:35.901 4.124 - 4.148: 95.8933% ( 41) 00:15:35.901 4.148 - 4.172: 96.1467% ( 34) 00:15:35.901 4.172 - 4.196: 96.3554% ( 28) 00:15:35.901 4.196 - 4.219: 96.5864% ( 31) 00:15:35.901 4.219 - 4.243: 96.7206% ( 18) 00:15:35.901 4.243 - 4.267: 96.7802% ( 8) 00:15:35.901 4.267 - 4.290: 96.9442% ( 22) 00:15:35.901 4.290 - 4.314: 97.0336% ( 12) 00:15:35.902 4.314 - 4.338: 97.1305% ( 13) 00:15:35.902 4.338 - 4.361: 97.2199% ( 12) 00:15:35.902 4.361 - 4.385: 97.2721% ( 7) 00:15:35.902 4.409 - 4.433: 97.3019% ( 4) 00:15:35.902 4.433 - 4.456: 97.3317% ( 4) 00:15:35.902 4.456 - 4.480: 97.3392% ( 1) 00:15:35.902 4.480 - 4.504: 97.3466% ( 1) 00:15:35.902 4.504 - 4.527: 97.3541% ( 1) 00:15:35.902 4.527 - 4.551: 97.3616% ( 1) 00:15:35.902 4.551 - 4.575: 97.3690% ( 1) 00:15:35.902 4.575 - 4.599: 97.3839% ( 2) 00:15:35.902 4.646 - 4.670: 97.3914% ( 1) 00:15:35.902 4.670 - 4.693: 97.4063% ( 2) 00:15:35.902 4.693 - 4.717: 97.4361% ( 4) 00:15:35.902 4.717 - 4.741: 97.4510% ( 2) 00:15:35.902 4.741 - 4.764: 97.4957% ( 6) 00:15:35.902 4.764 - 4.788: 97.5553% ( 8) 00:15:35.902 4.788 - 4.812: 97.5852% ( 4) 00:15:35.902 4.812 - 4.836: 97.6373% ( 7) 00:15:35.902 4.836 - 4.859: 97.7044% ( 9) 00:15:35.902 4.859 - 4.883: 97.7491% ( 6) 00:15:35.902 4.883 
- 4.907: 97.7938% ( 6) 00:15:35.902 4.907 - 4.930: 97.8684% ( 10) 00:15:35.902 4.930 - 4.954: 97.9205% ( 7) 00:15:35.902 4.954 - 4.978: 97.9429% ( 3) 00:15:35.902 4.978 - 5.001: 97.9802% ( 5) 00:15:35.902 5.001 - 5.025: 98.0025% ( 3) 00:15:35.902 5.025 - 5.049: 98.0249% ( 3) 00:15:35.902 5.049 - 5.073: 98.0845% ( 8) 00:15:35.902 5.073 - 5.096: 98.1143% ( 4) 00:15:35.902 5.096 - 5.120: 98.1218% ( 1) 00:15:35.902 5.120 - 5.144: 98.1441% ( 3) 00:15:35.902 5.144 - 5.167: 98.1665% ( 3) 00:15:35.902 5.167 - 5.191: 98.1740% ( 1) 00:15:35.902 5.191 - 5.215: 98.1814% ( 1) 00:15:35.902 5.215 - 5.239: 98.1889% ( 1) 00:15:35.902 5.262 - 5.286: 98.2038% ( 2) 00:15:35.902 5.310 - 5.333: 98.2112% ( 1) 00:15:35.902 5.357 - 5.381: 98.2336% ( 3) 00:15:35.902 5.428 - 5.452: 98.2410% ( 1) 00:15:35.902 5.499 - 5.523: 98.2485% ( 1) 00:15:35.902 5.570 - 5.594: 98.2559% ( 1) 00:15:35.902 5.618 - 5.641: 98.2634% ( 1) 00:15:35.902 5.784 - 5.807: 98.2709% ( 1) 00:15:35.902 5.879 - 5.902: 98.2783% ( 1) 00:15:35.902 6.021 - 6.044: 98.2858% ( 1) 00:15:35.902 6.116 - 6.163: 98.2932% ( 1) 00:15:35.902 6.305 - 6.353: 98.3007% ( 1) 00:15:35.902 6.542 - 6.590: 98.3081% ( 1) 00:15:35.902 6.637 - 6.684: 98.3230% ( 2) 00:15:35.902 6.732 - 6.779: 98.3305% ( 1) 00:15:35.902 6.779 - 6.827: 98.3454% ( 2) 00:15:35.902 6.827 - 6.874: 98.3528% ( 1) 00:15:35.902 6.921 - 6.969: 98.3752% ( 3) 00:15:35.902 7.016 - 7.064: 98.3976% ( 3) 00:15:35.902 7.064 - 7.111: 98.4125% ( 2) 00:15:35.902 7.159 - 7.206: 98.4199% ( 1) 00:15:35.902 7.206 - 7.253: 98.4274% ( 1) 00:15:35.902 7.253 - 7.301: 98.4423% ( 2) 00:15:35.902 7.301 - 7.348: 98.4497% ( 1) 00:15:35.902 7.348 - 7.396: 98.4572% ( 1) 00:15:35.902 7.443 - 7.490: 98.4646% ( 1) 00:15:35.902 7.490 - 7.538: 98.4795% ( 2) 00:15:35.902 7.538 - 7.585: 98.4944% ( 2) 00:15:35.902 7.585 - 7.633: 98.5094% ( 2) 00:15:35.902 7.775 - 7.822: 98.5243% ( 2) 00:15:35.902 7.822 - 7.870: 98.5541% ( 4) 00:15:35.902 7.870 - 7.917: 98.5839% ( 4) 00:15:35.902 7.964 - 8.012: 98.6062% ( 3) 00:15:35.902 8.059 - 8.107: 98.6137% ( 1) 00:15:35.902 8.107 - 8.154: 98.6212% ( 1) 00:15:35.902 8.201 - 8.249: 98.6435% ( 3) 00:15:35.902 8.249 - 8.296: 98.6510% ( 1) 00:15:35.902 8.770 - 8.818: 98.6584% ( 1) 00:15:35.902 8.865 - 8.913: 98.6659% ( 1) 00:15:35.902 9.055 - 9.102: 98.6733% ( 1) 00:15:35.902 9.292 - 9.339: 98.6882% ( 2) 00:15:35.902 9.339 - 9.387: 98.7031% ( 2) 00:15:35.902 9.434 - 9.481: 98.7106% ( 1) 00:15:35.902 9.624 - 9.671: 98.7180% ( 1) 00:15:35.902 9.671 - 9.719: 98.7255% ( 1) 00:15:35.902 9.956 - 10.003: 98.7404% ( 2) 00:15:35.902 10.050 - 10.098: 98.7479% ( 1) 00:15:35.902 10.098 - 10.145: 98.7553% ( 1) 00:15:35.902 10.287 - 10.335: 98.7628% ( 1) 00:15:35.902 10.667 - 10.714: 98.7777% ( 2) 00:15:35.902 10.856 - 10.904: 98.7851% ( 1) 00:15:35.902 10.999 - 11.046: 98.7926% ( 1) 00:15:35.902 11.236 - 11.283: 98.8000% ( 1) 00:15:35.902 11.283 - 11.330: 98.8075% ( 1) 00:15:35.902 11.378 - 11.425: 98.8149% ( 1) 00:15:35.902 11.520 - 11.567: 98.8224% ( 1) 00:15:35.902 11.662 - 11.710: 98.8298% ( 1) 00:15:35.902 11.804 - 11.852: 98.8373% ( 1) 00:15:35.902 12.041 - 12.089: 98.8447% ( 1) 00:15:35.902 12.136 - 12.231: 98.8522% ( 1) 00:15:35.902 12.231 - 12.326: 98.8597% ( 1) 00:15:35.902 12.326 - 12.421: 98.8671% ( 1) 00:15:35.902 12.421 - 12.516: 98.8746% ( 1) 00:15:35.902 12.516 - 12.610: 98.8820% ( 1) 00:15:35.902 12.705 - 12.800: 98.9044% ( 3) 00:15:35.902 12.800 - 12.895: 98.9118% ( 1) 00:15:35.902 12.990 - 13.084: 98.9193% ( 1) 00:15:35.902 13.084 - 13.179: 98.9267% ( 1) 00:15:35.902 13.559 - 13.653: 98.9416% ( 
2) 00:15:35.902 13.653 - 13.748: 98.9565% ( 2) 00:15:35.902 13.843 - 13.938: 98.9640% ( 1) 00:15:35.902 14.127 - 14.222: 98.9715% ( 1) 00:15:35.902 14.412 - 14.507: 98.9864% ( 2) 00:15:35.902 15.455 - 15.550: 98.9938% ( 1) 00:15:35.902 17.067 - 17.161: 99.0013% ( 1) 00:15:35.902 17.256 - 17.351: 99.0236% ( 3) 00:15:35.902 17.351 - 17.446: 99.0534% ( 4) 00:15:35.902 17.446 - 17.541: 99.0758% ( 3) 00:15:35.902 17.541 - 17.636: 99.1205% ( 6) 00:15:35.902 17.636 - 17.730: 99.1503% ( 4) 00:15:35.902 17.730 - 17.825: 99.2025% ( 7) 00:15:35.902 17.825 - 17.920: 99.2472% ( 6) 00:15:35.902 17.920 - 18.015: 99.3068% ( 8) 00:15:35.902 18.015 - 18.110: 99.3516% ( 6) 00:15:35.902 18.110 - 18.204: 99.4559% ( 14) 00:15:35.902 18.204 - 18.299: 99.5006% ( 6) 00:15:35.902 18.299 - 18.394: 99.5901% ( 12) 00:15:35.902 18.394 - 18.489: 99.6422% ( 7) 00:15:35.902 18.489 - 18.584: 99.7168% ( 10) 00:15:35.902 18.584 - 18.679: 99.7317% ( 2) 00:15:35.902 18.679 - 18.773: 99.7689% ( 5) 00:15:35.902 18.773 - 18.868: 99.7988% ( 4) 00:15:35.902 18.868 - 18.963: 99.8137% ( 2) 00:15:35.902 18.963 - 19.058: 99.8286% ( 2) 00:15:35.902 19.058 - 19.153: 99.8360% ( 1) 00:15:35.902 19.153 - 19.247: 99.8584% ( 3) 00:15:35.902 19.247 - 19.342: 99.8733% ( 2) 00:15:35.902 20.670 - 20.764: 99.8807% ( 1) 00:15:35.902 24.178 - 24.273: 99.8882% ( 1) 00:15:35.902 3980.705 - 4004.978: 99.9925% ( 14) 00:15:35.902 4004.978 - 4029.250: 100.0000% ( 1) 00:15:35.902 00:15:35.902 Complete histogram 00:15:35.902 ================== 00:15:35.902 Range in us Cumulative Count 00:15:35.902 2.039 - 2.050: 0.1863% ( 25) 00:15:35.902 2.050 - 2.062: 19.9299% ( 2649) 00:15:35.902 2.062 - 2.074: 38.3767% ( 2475) 00:15:35.902 2.074 - 2.086: 41.6114% ( 434) 00:15:35.902 2.086 - 2.098: 53.3204% ( 1571) 00:15:35.902 2.098 - 2.110: 59.1712% ( 785) 00:15:35.902 2.110 - 2.121: 62.0929% ( 392) 00:15:35.902 2.121 - 2.133: 73.5038% ( 1531) 00:15:35.902 2.133 - 2.145: 77.1782% ( 493) 00:15:35.902 2.145 - 2.157: 78.9074% ( 232) 00:15:35.902 2.157 - 2.169: 83.5954% ( 629) 00:15:35.902 2.169 - 2.181: 85.3693% ( 238) 00:15:35.902 2.181 - 2.193: 86.5544% ( 159) 00:15:35.902 2.193 - 2.204: 89.2226% ( 358) 00:15:35.902 2.204 - 2.216: 91.2648% ( 274) 00:15:35.902 2.216 - 2.228: 92.9194% ( 222) 00:15:35.902 2.228 - 2.240: 94.1269% ( 162) 00:15:35.902 2.240 - 2.252: 94.6337% ( 68) 00:15:35.902 2.252 - 2.264: 94.7902% ( 21) 00:15:35.902 2.264 - 2.276: 95.0660% ( 37) 00:15:35.902 2.276 - 2.287: 95.4908% ( 57) 00:15:35.902 2.287 - 2.299: 95.9305% ( 59) 00:15:35.902 2.299 - 2.311: 96.0125% ( 11) 00:15:35.902 2.311 - 2.323: 96.0796% ( 9) 00:15:35.902 2.323 - 2.335: 96.1541% ( 10) 00:15:35.902 2.335 - 2.347: 96.3032% ( 20) 00:15:35.902 2.347 - 2.359: 96.6013% ( 40) 00:15:35.903 2.359 - 2.370: 96.9293% ( 44) 00:15:35.903 2.370 - 2.382: 97.1976% ( 36) 00:15:35.903 2.382 - 2.394: 97.4883% ( 39) 00:15:35.903 2.394 - 2.406: 97.6522% ( 22) 00:15:35.903 2.406 - 2.418: 97.7789% ( 17) 00:15:35.903 2.418 - 2.430: 97.9355% ( 21) 00:15:35.903 2.430 - 2.441: 98.0323% ( 13) 00:15:35.903 2.441 - 2.453: 98.1292% ( 13) 00:15:35.903 2.453 - 2.465: 98.1889% ( 8) 00:15:35.903 2.465 - 2.477: 98.2261% ( 5) 00:15:35.903 2.477 - 2.489: 98.2783% ( 7) 00:15:35.903 2.489 - 2.501: 98.3230% ( 6) 00:15:35.903 2.501 - 2.513: 98.3752% ( 7) 00:15:35.903 2.513 - 2.524: 98.4274% ( 7) 00:15:35.903 2.524 - 2.536: 98.4497% ( 3) 00:15:35.903 2.536 - 2.548: 98.4572% ( 1) 00:15:35.903 2.548 - 2.560: 98.4721% ( 2) 00:15:35.903 2.560 - 2.572: 98.4795% ( 1) 00:15:35.903 2.607 - 2.619: 98.4944% ( 2) 00:15:35.903 2.643 - 
2.655: 98.5019% ( 1) 00:15:35.903 2.667 - 2.679: 98.5094% ( 1) 00:15:35.903 2.726 - 2.738: 98.5168% ( 1) 00:15:35.903 3.295 - 3.319: 98.5317% ( 2) 00:15:35.903 3.342 - 3.366: 98.5392% ( 1) 00:15:35.903 3.366 - 3.390: 98.5541% ( 2) 00:15:35.903 3.390 - 3.413: 98.5615% ( 1) 00:15:35.903 3.437 - 3.461: 98.5764% ( 2) 00:15:35.903 3.461 - 3.484: 98.5913% ( 2) 00:15:35.903 3.508 - 3.532: 98.6062% ( 2) 00:15:35.903 3.532 - 3.556: 98.6137% ( 1) 00:15:35.903 3.579 - 3.603: 98.6212% ( 1) 00:15:35.903 3.627 - 3.650: 98.6286% ( 1) 00:15:35.903 3.650 - 3.674: 98.6361% ( 1) 00:15:35.903 3.674 - 3.698: 98.6510% ( 2) 00:15:35.903 3.698 - 3.721: 98.6584% ( 1) 00:15:35.903 3.769 - 3.793: 98.6659% ( 1) 00:15:35.903 3.840 - 3.864: 98.6733% ( 1) 00:15:35.903 3.864 - 3.887: 98.6882% ( 2) 00:15:35.903 3.887 - 3.911: 98.6957% ( 1) 00:15:35.903 3.911 - 3.935: 98.7031% ( 1) 00:15:35.903 3.935 - 3.959: 98.7106% ( 1) 00:15:35.903 3.982 - 4.006: 98.7330% ( 3) 00:15:35.903 4.053 - 4.077: 98.7404% ( 1) 00:15:35.903 5.025 - 5.049: 98.7479% ( 1) 00:15:35.903 5.120 - 5.144: 98.7553% ( 1) 00:15:35.903 5.404 - 5.428: 98.7777% ( 3) 00:15:35.903 5.428 - 5.452: 98.7926% ( 2) 00:15:35.903 5.760 - 5.784: 98.8075% ( 2) 00:15:35.903 5.807 - 5.831: 98.8149% ( 1) 00:15:35.903 5.902 - 5.926: 98.8224% ( 1) 00:15:35.903 5.950 - 5.973: 98.8298% ( 1) 00:15:35.903 6.068 - 6.116: 98.8373% ( 1) 00:15:35.903 6.116 - 6.163: 98.8522% ( 2) [2024-07-15 16:14:18.516538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.903 6.163 - 6.210: 98.8671% ( 2) 00:15:35.903 6.258 - 6.305: 98.8746% ( 1) 00:15:35.903 6.495 - 6.542: 98.8820% ( 1) 00:15:35.903 6.637 - 6.684: 98.8895% ( 1) 00:15:35.903 6.779 - 6.827: 98.8969% ( 1) 00:15:35.903 7.206 - 7.253: 98.9044% ( 1) 00:15:35.903 10.193 - 10.240: 98.9118% ( 1) 00:15:35.903 15.360 - 15.455: 98.9193% ( 1) 00:15:35.903 15.455 - 15.550: 98.9342% ( 2) 00:15:35.903 15.550 - 15.644: 98.9416% ( 1) 00:15:35.903 15.644 - 15.739: 98.9491% ( 1) 00:15:35.903 15.834 - 15.929: 98.9938% ( 6) 00:15:35.903 15.929 - 16.024: 99.0162% ( 3) 00:15:35.903 16.024 - 16.119: 99.0460% ( 4) 00:15:35.903 16.119 - 16.213: 99.0907% ( 6) 00:15:35.903 16.213 - 16.308: 99.1131% ( 3) 00:15:35.903 16.308 - 16.403: 99.1429% ( 4) 00:15:35.903 16.403 - 16.498: 99.1652% ( 3) 00:15:35.903 16.498 - 16.593: 99.2025% ( 5) 00:15:35.903 16.593 - 16.687: 99.2323% ( 4) 00:15:35.903 16.687 - 16.782: 99.2845% ( 7) 00:15:35.903 16.782 - 16.877: 99.3218% ( 5) 00:15:35.903 16.877 - 16.972: 99.3590% ( 5) 00:15:35.903 16.972 - 17.067: 99.3739% ( 2) 00:15:35.903 17.067 - 17.161: 99.3888% ( 2) 00:15:35.903 17.351 - 17.446: 99.3963% ( 1) 00:15:35.903 17.825 - 17.920: 99.4037% ( 1) 00:15:35.903 18.015 - 18.110: 99.4112% ( 1) 00:15:35.903 2160.261 - 2172.397: 99.4186% ( 1) 00:15:35.903 3980.705 - 4004.978: 99.9329% ( 69) 00:15:35.903 4004.978 - 4029.250: 100.0000% ( 9) 00:15:35.903 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.903 [ 00:15:35.903 { 00:15:35.903 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.903 "subtype": "Discovery", 00:15:35.903 "listen_addresses": [], 00:15:35.903 "allow_any_host": true, 00:15:35.903 "hosts": [] 00:15:35.903 }, 00:15:35.903 { 00:15:35.903 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.903 "subtype": "NVMe", 00:15:35.903 "listen_addresses": [ 00:15:35.903 { 00:15:35.903 "trtype": "VFIOUSER", 00:15:35.903 "adrfam": "IPv4", 00:15:35.903 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.903 "trsvcid": "0" 00:15:35.903 } 00:15:35.903 ], 00:15:35.903 "allow_any_host": true, 00:15:35.903 "hosts": [], 00:15:35.903 "serial_number": "SPDK1", 00:15:35.903 "model_number": "SPDK bdev Controller", 00:15:35.903 "max_namespaces": 32, 00:15:35.903 "min_cntlid": 1, 00:15:35.903 "max_cntlid": 65519, 00:15:35.903 "namespaces": [ 00:15:35.903 { 00:15:35.903 "nsid": 1, 00:15:35.903 "bdev_name": "Malloc1", 00:15:35.903 "name": "Malloc1", 00:15:35.903 "nguid": "27DC491A98FA433E8C944EF8F969EA34", 00:15:35.903 "uuid": "27dc491a-98fa-433e-8c94-4ef8f969ea34" 00:15:35.903 }, 00:15:35.903 { 00:15:35.903 "nsid": 2, 00:15:35.903 "bdev_name": "Malloc3", 00:15:35.903 "name": "Malloc3", 00:15:35.903 "nguid": "EAB9B533A3ED42519D057AFEAF0690C3", 00:15:35.903 "uuid": "eab9b533-a3ed-4251-9d05-7afeaf0690c3" 00:15:35.903 } 00:15:35.903 ] 00:15:35.903 }, 00:15:35.903 { 00:15:35.903 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.903 "subtype": "NVMe", 00:15:35.903 "listen_addresses": [ 00:15:35.903 { 00:15:35.903 "trtype": "VFIOUSER", 00:15:35.903 "adrfam": "IPv4", 00:15:35.903 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.903 "trsvcid": "0" 00:15:35.903 } 00:15:35.903 ], 00:15:35.903 "allow_any_host": true, 00:15:35.903 "hosts": [], 00:15:35.903 "serial_number": "SPDK2", 00:15:35.903 "model_number": "SPDK bdev Controller", 00:15:35.903 "max_namespaces": 32, 00:15:35.903 "min_cntlid": 1, 00:15:35.903 "max_cntlid": 65519, 00:15:35.903 "namespaces": [ 00:15:35.903 { 00:15:35.903 "nsid": 1, 00:15:35.903 "bdev_name": "Malloc2", 00:15:35.903 "name": "Malloc2", 00:15:35.903 "nguid": "BB317FF40F09489388375E1D31B48FF0", 00:15:35.903 "uuid": "bb317ff4-0f09-4893-8837-5e1d31b48ff0" 00:15:35.903 } 00:15:35.903 ] 00:15:35.903 } 00:15:35.903 ] 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=287034 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:35.903 16:14:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:35.903 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.161 [2024-07-15 16:14:18.957214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.161 Malloc4 00:15:36.161 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:36.418 [2024-07-15 16:14:19.326898] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.418 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.418 Asynchronous Event Request test 00:15:36.418 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.418 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.418 Registering asynchronous event callbacks... 00:15:36.418 Starting namespace attribute notice tests for all controllers... 00:15:36.418 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:36.418 aer_cb - Changed Namespace 00:15:36.418 Cleaning up... 00:15:36.676 [ 00:15:36.676 { 00:15:36.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.676 "subtype": "Discovery", 00:15:36.676 "listen_addresses": [], 00:15:36.676 "allow_any_host": true, 00:15:36.676 "hosts": [] 00:15:36.676 }, 00:15:36.676 { 00:15:36.676 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.676 "subtype": "NVMe", 00:15:36.676 "listen_addresses": [ 00:15:36.676 { 00:15:36.676 "trtype": "VFIOUSER", 00:15:36.676 "adrfam": "IPv4", 00:15:36.676 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.676 "trsvcid": "0" 00:15:36.676 } 00:15:36.676 ], 00:15:36.676 "allow_any_host": true, 00:15:36.676 "hosts": [], 00:15:36.676 "serial_number": "SPDK1", 00:15:36.676 "model_number": "SPDK bdev Controller", 00:15:36.676 "max_namespaces": 32, 00:15:36.676 "min_cntlid": 1, 00:15:36.676 "max_cntlid": 65519, 00:15:36.676 "namespaces": [ 00:15:36.676 { 00:15:36.676 "nsid": 1, 00:15:36.676 "bdev_name": "Malloc1", 00:15:36.676 "name": "Malloc1", 00:15:36.676 "nguid": "27DC491A98FA433E8C944EF8F969EA34", 00:15:36.676 "uuid": "27dc491a-98fa-433e-8c94-4ef8f969ea34" 00:15:36.676 }, 00:15:36.676 { 00:15:36.676 "nsid": 2, 00:15:36.676 "bdev_name": "Malloc3", 00:15:36.676 "name": "Malloc3", 00:15:36.676 "nguid": "EAB9B533A3ED42519D057AFEAF0690C3", 00:15:36.676 "uuid": "eab9b533-a3ed-4251-9d05-7afeaf0690c3" 00:15:36.676 } 00:15:36.676 ] 00:15:36.676 }, 00:15:36.676 { 00:15:36.676 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.676 "subtype": "NVMe", 00:15:36.676 "listen_addresses": [ 00:15:36.676 { 00:15:36.676 "trtype": "VFIOUSER", 00:15:36.676 "adrfam": "IPv4", 00:15:36.676 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.676 "trsvcid": "0" 00:15:36.676 } 00:15:36.676 ], 00:15:36.676 "allow_any_host": true, 00:15:36.676 "hosts": [], 00:15:36.676 "serial_number": "SPDK2", 00:15:36.676 "model_number": "SPDK bdev Controller", 00:15:36.676 
"max_namespaces": 32, 00:15:36.676 "min_cntlid": 1, 00:15:36.676 "max_cntlid": 65519, 00:15:36.676 "namespaces": [ 00:15:36.676 { 00:15:36.676 "nsid": 1, 00:15:36.676 "bdev_name": "Malloc2", 00:15:36.676 "name": "Malloc2", 00:15:36.676 "nguid": "BB317FF40F09489388375E1D31B48FF0", 00:15:36.676 "uuid": "bb317ff4-0f09-4893-8837-5e1d31b48ff0" 00:15:36.676 }, 00:15:36.676 { 00:15:36.676 "nsid": 2, 00:15:36.676 "bdev_name": "Malloc4", 00:15:36.676 "name": "Malloc4", 00:15:36.676 "nguid": "4D6B51D6A8B44273983949201DA65645", 00:15:36.676 "uuid": "4d6b51d6-a8b4-4273-9839-49201da65645" 00:15:36.676 } 00:15:36.676 ] 00:15:36.676 } 00:15:36.676 ] 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 287034 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 281520 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 281520 ']' 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 281520 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 281520 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 281520' 00:15:36.676 killing process with pid 281520 00:15:36.676 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 281520 00:15:36.677 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 281520 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=287176 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 287176' 00:15:37.242 Process pid: 287176 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 287176 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 287176 ']' 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:37.242 16:14:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.242 [2024-07-15 16:14:20.008756] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:37.242 [2024-07-15 16:14:20.009891] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:37.242 [2024-07-15 16:14:20.009954] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.242 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.243 [2024-07-15 16:14:20.075450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.243 [2024-07-15 16:14:20.166058] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.243 [2024-07-15 16:14:20.166109] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.243 [2024-07-15 16:14:20.166124] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.243 [2024-07-15 16:14:20.166135] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.243 [2024-07-15 16:14:20.166145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.243 [2024-07-15 16:14:20.166225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.243 [2024-07-15 16:14:20.166291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.243 [2024-07-15 16:14:20.166335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.243 [2024-07-15 16:14:20.166337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.500 [2024-07-15 16:14:20.267478] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:37.500 [2024-07-15 16:14:20.267711] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:37.500 [2024-07-15 16:14:20.268012] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:37.500 [2024-07-15 16:14:20.268650] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:37.500 [2024-07-15 16:14:20.268898] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
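The AER exercise traced above (target/nvmf_vfio_user.sh@90, aer_vfio_user) is worth restating as a standalone sequence: the aer tool arms an Asynchronous Event Request against cnode2 and, judging by the trace ordering, touches /tmp/aer_touch_file once it is armed; the script then hot-adds a second namespace, which fires the namespace-attribute-change AEN seen in the output above. A minimal sketch, assuming rpc.py and the aer binary from an SPDK build tree are on PATH (the hard-coded Jenkins paths are dropped; everything else is copied from the trace):

  traddr=/var/run/vfio-user/domain/vfio-user2/2
  subnqn=nqn.2019-07.io.spdk:cnode2
  # Arm the AER listener in the background; -t names the touch file.
  test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
      -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  # Simplified stand-in for common.sh's waitforfile helper:
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  rm -f /tmp/aer_touch_file
  # Hot-add nsid 2; this is what triggers the AEN.
  rpc.py bdev_malloc_create 64 512 --name Malloc4
  rpc.py nvmf_subsystem_add_ns "$subnqn" Malloc4 -n 2
  wait $aerpid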
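At this point the second nvmf_tgt (started above with --interrupt-mode) is up, and the setup_nvmf_vfio_user steps traced below configure it. They reduce to the following RPC sequence; a condensed sketch under the same PATH assumption, where '-M -I' is simply the transport_args string the test passes through to nvmf_create_transport:

  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user
  for i in 1 2; do                                   # NUM_DEVICES is 2 in this run
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i   # 64 MiB bdev, 512-byte blocks
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done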
00:15:37.500 16:14:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:37.500 16:14:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:37.500 16:14:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:38.432 16:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:38.690 16:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:38.690 16:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:38.690 16:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.690 16:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:38.690 16:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:38.950 Malloc1 00:15:38.950 16:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:39.209 16:14:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:39.467 16:14:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:39.724 16:14:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.724 16:14:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:39.724 16:14:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:39.982 Malloc2 00:15:39.982 16:14:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:40.240 16:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:40.497 16:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 287176 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 287176 ']' 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 287176 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:40.755 16:14:23 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 287176 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 287176' 00:15:40.755 killing process with pid 287176 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 287176 00:15:40.755 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 287176 00:15:41.012 16:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:41.012 16:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:41.012 00:15:41.012 real 0m52.288s 00:15:41.012 user 3m26.543s 00:15:41.012 sys 0m4.375s 00:15:41.012 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.012 16:14:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:41.012 ************************************ 00:15:41.012 END TEST nvmf_vfio_user 00:15:41.012 ************************************ 00:15:41.012 16:14:23 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:41.012 16:14:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:41.012 16:14:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.012 16:14:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.012 ************************************ 00:15:41.012 START TEST nvmf_vfio_user_nvme_compliance 00:15:41.012 ************************************ 00:15:41.012 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:41.012 * Looking for test storage... 
00:15:41.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.270 16:14:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.270 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=287771 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 287771' 00:15:41.271 Process pid: 287771 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 287771 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 287771 ']' 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:41.271 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.271 [2024-07-15 16:14:24.060359] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:41.271 [2024-07-15 16:14:24.060435] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.271 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.271 [2024-07-15 16:14:24.119023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.271 [2024-07-15 16:14:24.203237] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.271 [2024-07-15 16:14:24.203293] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.271 [2024-07-15 16:14:24.203307] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.271 [2024-07-15 16:14:24.203318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.271 [2024-07-15 16:14:24.203327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
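Target-side setup for the compliance suite, traced below, follows the same pattern with a single malloc-backed subsystem; condensed into a sketch under the same assumptions (-m 32 caps the subsystem at 32 namespaces):

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  # Then point the CUnit suite at the bare vfio-user endpoint:
  test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'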
00:15:41.271 [2024-07-15 16:14:24.203377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.271 [2024-07-15 16:14:24.203532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.271 [2024-07-15 16:14:24.203535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.529 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:41.529 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:41.529 16:14:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.462 malloc0 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.462 16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.462 
16:14:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:42.718 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.718 00:15:42.718 00:15:42.718 CUnit - A unit testing framework for C - Version 2.1-3 00:15:42.718 http://cunit.sourceforge.net/ 00:15:42.718 00:15:42.718 00:15:42.718 Suite: nvme_compliance 00:15:42.718 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 16:14:25.550041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.718 [2024-07-15 16:14:25.551525] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:42.718 [2024-07-15 16:14:25.551549] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:42.718 [2024-07-15 16:14:25.551576] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:42.718 [2024-07-15 16:14:25.553051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.718 passed 00:15:42.718 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 16:14:25.639691] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.718 [2024-07-15 16:14:25.642712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.718 passed 00:15:42.974 Test: admin_identify_ns ...[2024-07-15 16:14:25.729274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.974 [2024-07-15 16:14:25.787758] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:42.974 [2024-07-15 16:14:25.795766] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:42.974 [2024-07-15 16:14:25.816900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.974 passed 00:15:42.974 Test: admin_get_features_mandatory_features ...[2024-07-15 16:14:25.902918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.974 [2024-07-15 16:14:25.905937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.974 passed 00:15:43.231 Test: admin_get_features_optional_features ...[2024-07-15 16:14:25.992503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.231 [2024-07-15 16:14:25.995525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.231 passed 00:15:43.231 Test: admin_set_features_number_of_queues ...[2024-07-15 16:14:26.077961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.231 [2024-07-15 16:14:26.186992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.488 passed 00:15:43.488 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 16:14:26.272406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.488 [2024-07-15 16:14:26.275433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.488 passed 00:15:43.488 Test: admin_get_log_page_with_lpo ...[2024-07-15 16:14:26.356862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.488 [2024-07-15 16:14:26.424756] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:43.488 [2024-07-15 16:14:26.437893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.744 passed 00:15:43.744 Test: fabric_property_get ...[2024-07-15 16:14:26.521198] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.744 [2024-07-15 16:14:26.522466] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:43.744 [2024-07-15 16:14:26.524215] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.744 passed 00:15:43.744 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 16:14:26.609781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.744 [2024-07-15 16:14:26.611101] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:43.744 [2024-07-15 16:14:26.612803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.744 passed 00:15:43.744 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 16:14:26.695926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.057 [2024-07-15 16:14:26.783752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:44.057 [2024-07-15 16:14:26.799749] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:44.057 [2024-07-15 16:14:26.804914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.057 passed 00:15:44.057 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 16:14:26.885602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.057 [2024-07-15 16:14:26.886906] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:44.057 [2024-07-15 16:14:26.890635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.057 passed 00:15:44.057 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 16:14:26.973862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.314 [2024-07-15 16:14:27.047746] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:44.314 [2024-07-15 16:14:27.071748] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:44.314 [2024-07-15 16:14:27.076866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.314 passed 00:15:44.314 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 16:14:27.162105] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.314 [2024-07-15 16:14:27.163401] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:44.314 [2024-07-15 16:14:27.163454] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:44.314 [2024-07-15 16:14:27.165132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.314 passed 00:15:44.314 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 16:14:27.248261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.571 [2024-07-15 16:14:27.339751] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:44.571 [2024-07-15 16:14:27.347751] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:44.571 [2024-07-15 16:14:27.355748] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:44.571 [2024-07-15 16:14:27.363750] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:44.571 [2024-07-15 16:14:27.392869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.571 passed 00:15:44.571 Test: admin_create_io_sq_verify_pc ...[2024-07-15 16:14:27.476560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.571 [2024-07-15 16:14:27.492763] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:44.571 [2024-07-15 16:14:27.509912] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.571 passed 00:15:44.827 Test: admin_create_io_qp_max_qps ...[2024-07-15 16:14:27.593517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.757 [2024-07-15 16:14:28.693755] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:46.322 [2024-07-15 16:14:29.074295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.322 passed 00:15:46.322 Test: admin_create_io_sq_shared_cq ...[2024-07-15 16:14:29.156311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.323 [2024-07-15 16:14:29.287761] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:46.581 [2024-07-15 16:14:29.319830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.581 passed 00:15:46.581 00:15:46.581 Run Summary: Type Total Ran Passed Failed Inactive 00:15:46.581 suites 1 1 n/a 0 0 00:15:46.581 tests 18 18 18 0 0 00:15:46.581 asserts 360 360 360 0 n/a 00:15:46.581 00:15:46.581 Elapsed time = 1.562 seconds 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 287771 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 287771 ']' 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 287771 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 287771 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 287771' 00:15:46.581 killing process with pid 287771 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 287771 00:15:46.581 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 287771 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:46.840 00:15:46.840 real 0m5.695s 00:15:46.840 user 0m16.000s 00:15:46.840 sys 0m0.544s 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.840 ************************************ 00:15:46.840 END TEST nvmf_vfio_user_nvme_compliance 00:15:46.840 ************************************ 00:15:46.840 16:14:29 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.840 16:14:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:46.840 16:14:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:46.840 16:14:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.840 ************************************ 00:15:46.840 START TEST nvmf_vfio_user_fuzz 00:15:46.840 ************************************ 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.840 * Looking for test storage... 00:15:46.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.840 16:14:29 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=288494 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 288494' 00:15:46.840 Process pid: 288494 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 288494 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 288494 ']' 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
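The fuzz target brought up below is the same one-subsystem layout; the only new piece is the nvme_fuzz invocation. A sketch under the same assumptions, reading -t as run time in seconds and -S as the RNG seed (consistent with the roughly 32 s real time reported at the end); the remaining flags are copied verbatim from the trace:

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  # Fuzz the admin and I/O queues for 30 seconds with a fixed seed:
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a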
00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:46.840 16:14:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.099 16:14:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:47.099 16:14:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:47.099 16:14:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.470 malloc0 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:48.470 16:14:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:20.532 Fuzzing completed. 
Shutting down the fuzz application 00:16:20.532 00:16:20.532 Dumping successful admin opcodes: 00:16:20.532 8, 9, 10, 24, 00:16:20.532 Dumping successful io opcodes: 00:16:20.532 0, 00:16:20.532 NS: 0x200003a1ef00 I/O qp, Total commands completed: 598024, total successful commands: 2313, random_seed: 1208947776 00:16:20.532 NS: 0x200003a1ef00 admin qp, Total commands completed: 76826, total successful commands: 595, random_seed: 3555375232 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 288494 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 288494 ']' 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 288494 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 288494 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 288494' 00:16:20.532 killing process with pid 288494 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 288494 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 288494 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:20.532 00:16:20.532 real 0m32.180s 00:16:20.532 user 0m31.562s 00:16:20.532 sys 0m28.726s 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:20.532 16:15:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.532 ************************************ 00:16:20.532 END TEST nvmf_vfio_user_fuzz 00:16:20.532 ************************************ 00:16:20.532 16:15:01 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:20.532 16:15:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:20.532 16:15:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:20.532 16:15:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:20.532 ************************************ 00:16:20.532 START TEST nvmf_host_management 00:16:20.532 ************************************ 
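Each suite in this log is driven by the run_test helper (see the run_test nvmf_host_management call just above). Its source is not shown here; judging only by the banners and time output it produces, it behaves roughly like the sketch below. The real helper in autotest_common.sh also manages xtrace toggling and argument checks, so treat this as illustrative only:

  run_test() {                          # usage: run_test <name> <command> [args...]
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                         # emits the real/user/sys lines seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }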
00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:20.532 * Looking for test storage... 00:16:20.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.532 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:20.533 16:15:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.101 16:15:03 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:21.101 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:21.101 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:21.101 Found net devices under 0000:84:00.0: cvl_0_0 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:21.101 Found net devices under 0000:84:00.1: cvl_0_1 00:16:21.101 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.102 16:15:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.102 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.102 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.102 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.102 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.102 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.102 16:15:04 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.102 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:16:21.102 00:16:21.102 --- 10.0.0.2 ping statistics --- 00:16:21.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.102 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:21.102 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:16:21.360 00:16:21.360 --- 10.0.0.1 ping statistics --- 00:16:21.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.360 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=293952 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 293952 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 293952 ']' 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:21.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:21.360 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.360 [2024-07-15 16:15:04.158658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... [2024-07-15 16:15:04.158755] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.360 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.360 [2024-07-15 16:15:04.230946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.360 [2024-07-15 16:15:04.324187] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.360 [2024-07-15 16:15:04.324248] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.360 [2024-07-15 16:15:04.324275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.360 [2024-07-15 16:15:04.324288] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.360 [2024-07-15 16:15:04.324300] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.360 [2024-07-15 16:15:04.324402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.360 [2024-07-15 16:15:04.324490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.360 [2024-07-15 16:15:04.324555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:21.360 [2024-07-15 16:15:04.324558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.618 [2024-07-15 16:15:04.486763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management --
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.618 Malloc0 00:16:21.618 [2024-07-15 16:15:04.547617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=294109 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 294109 /var/tmp/bdevperf.sock 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 294109 ']' 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:21.618 { 00:16:21.618 "params": { 00:16:21.618 "name": "Nvme$subsystem", 00:16:21.618 "trtype": "$TEST_TRANSPORT", 00:16:21.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.618 "adrfam": "ipv4", 00:16:21.618 "trsvcid": "$NVMF_PORT", 00:16:21.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.618 "hdgst": ${hdgst:-false}, 00:16:21.618 "ddgst": ${ddgst:-false} 00:16:21.618 }, 00:16:21.618 "method": "bdev_nvme_attach_controller" 00:16:21.618 } 00:16:21.618 EOF 00:16:21.618 )") 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:21.618 16:15:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:21.618 "params": { 00:16:21.618 "name": "Nvme0", 00:16:21.618 "trtype": "tcp", 00:16:21.618 "traddr": "10.0.0.2", 00:16:21.618 "adrfam": "ipv4", 00:16:21.618 "trsvcid": "4420", 00:16:21.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:21.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:21.618 "hdgst": false, 00:16:21.618 "ddgst": false 00:16:21.618 }, 00:16:21.618 "method": "bdev_nvme_attach_controller" 00:16:21.618 }' 00:16:21.876 [2024-07-15 16:15:04.627434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:21.876 [2024-07-15 16:15:04.627521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294109 ] 00:16:21.876 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.876 [2024-07-15 16:15:04.690499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.876 [2024-07-15 16:15:04.778485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.134 Running I/O for 10 seconds... 
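The heredoc traced above is how gen_nvmf_target_json renders the controller definition that bdevperf consumes over an anonymous descriptor (--json /dev/fd/63). A standalone sketch of the equivalent invocation, with the JSON written to a regular file instead; note that the outer subsystems/bdev wrapper is reconstructed from nvmf/common.sh and is not shown verbatim in the trace:

    # Equivalent of: gen_nvmf_target_json 0 | bdevperf --json /dev/fd/63 ...
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10

Feeding the config via --json rather than post-start RPCs keeps bdevperf self-contained: the Nvme0n1 bdev exists as soon as the app starts, which is why the iostat polling that follows can see read completions almost immediately.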
00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:22.393 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.654 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:22.654 [2024-07-15 16:15:05.507078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.507957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.507976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.654 [2024-07-15 16:15:05.508295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.654 [2024-07-15 16:15:05.508311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.508973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.508989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.509021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.509053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:22.655 [2024-07-15 16:15:05.509101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.509137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.509168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.509198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.509229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:22.655 [2024-07-15 16:15:05.509269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.655 [2024-07-15 16:15:05.509366] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x171cd20 was disconnected and freed. reset controller. 
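Each entry in the dump above is one in-flight command completed with an abort status when the host was removed from the subsystem (the nvmf_subsystem_remove_host RPC at host_management.sh@84): the target drops the connection, the submission queue is deleted, and every outstanding command returns ABORTED - SQ DELETION. A quick sanity check from a saved copy of this output (dump.log is a hypothetical file name for this section of the trace):

    # One abort per queue slot: 51 WRITEs (cid 13-63) + 13 READs (cid 0-12) = 64,
    # matching the bdevperf queue depth of -q 64 (dump.log is hypothetical)
    grep -c 'ABORTED - SQ DELETION' dump.log
    grep -o '\(READ\|WRITE\) sqid:1' dump.log | sort | uniq -c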
00:16:22.655 [2024-07-15 16:15:05.510506] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:22.655 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.655 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:22.655 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.655 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:22.655 task offset: 75392 on job bdev=Nvme0n1 fails
00:16:22.655
00:16:22.655                                                                    Latency(us)
00:16:22.655 Device Information                                         : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min       max
00:16:22.655 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:22.655 Job: Nvme0n1 ended in about 0.40 seconds with error
00:16:22.655 Verification LBA range: start 0x0 length 0x400
00:16:22.655 Nvme0n1                                                    :       0.40  1451.29    90.71   161.25    0.00   38538.98   2827.76  34758.35
00:16:22.655 ===================================================================================================================
00:16:22.655 Total                                                      :             1451.29    90.71   161.25    0.00   38538.98   2827.76  34758.35
00:16:22.655 [2024-07-15 16:15:05.512406] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:22.655 [2024-07-15 16:15:05.512436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130ba10 (9): Bad file descriptor
00:16:22.655 16:15:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.655 16:15:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-15 16:15:05.523815] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
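The read_io_count checks visible earlier in the trace (host_management.sh@52-@64: a first sample of 67 reads fell short of the threshold, and after a 0.25 s sleep a second sample of 515 passed it) are a small retry loop that waits for the job to complete at least 100 reads before the host is yanked, so the abort path has real I/O to abort. Reconstructed as a sketch, assuming the stock scripts/rpc.py and jq on PATH:

    # waitforio: poll bdevperf iostat until at least 100 reads have completed (max 10 tries)
    ret=1
    for ((i = 10; i != 0; i--)); do
        count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0   # enough I/O observed; safe to remove the host out from under the job
            break
        fi
        sleep 0.25
    done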
00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 294109 00:16:23.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (294109) - No such process 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:23.628 { 00:16:23.628 "params": { 00:16:23.628 "name": "Nvme$subsystem", 00:16:23.628 "trtype": "$TEST_TRANSPORT", 00:16:23.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.628 "adrfam": "ipv4", 00:16:23.628 "trsvcid": "$NVMF_PORT", 00:16:23.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.628 "hdgst": ${hdgst:-false}, 00:16:23.628 "ddgst": ${ddgst:-false} 00:16:23.628 }, 00:16:23.628 "method": "bdev_nvme_attach_controller" 00:16:23.628 } 00:16:23.628 EOF 00:16:23.628 )") 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:23.628 16:15:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:23.628 "params": { 00:16:23.628 "name": "Nvme0", 00:16:23.628 "trtype": "tcp", 00:16:23.628 "traddr": "10.0.0.2", 00:16:23.628 "adrfam": "ipv4", 00:16:23.628 "trsvcid": "4420", 00:16:23.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:23.628 "hdgst": false, 00:16:23.628 "ddgst": false 00:16:23.628 }, 00:16:23.628 "method": "bdev_nvme_attach_controller" 00:16:23.628 }' 00:16:23.628 [2024-07-15 16:15:06.565476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:23.628 [2024-07-15 16:15:06.565561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294386 ] 00:16:23.628 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.887 [2024-07-15 16:15:06.629214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.887 [2024-07-15 16:15:06.715675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.145 Running I/O for 1 seconds... 
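Note that the bdevperf invocation above never touches a config file on disk: gen_nvmf_target_json renders the bdev_nvme_attach_controller parameters printed in the trace, and the shell hands them to bdevperf through a process substitution, which is why the command line reads --json /dev/fd/62. Run standalone, the equivalent would be roughly the sketch below (it assumes the test's nvmf common.sh has been sourced so the helper exists, and that the target from this run is still listening).
  # Rough standalone equivalent of the invocation above (sketch).
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1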
00:16:25.077 00:16:25.077 Latency(us) 00:16:25.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.077 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:25.077 Verification LBA range: start 0x0 length 0x400 00:16:25.077 Nvme0n1 : 1.03 1553.12 97.07 0.00 0.00 40566.22 6747.78 33593.27 00:16:25.077 =================================================================================================================== 00:16:25.077 Total : 1553.12 97.07 0.00 0.00 40566.22 6747.78 33593.27 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.333 rmmod nvme_tcp 00:16:25.333 rmmod nvme_fabrics 00:16:25.333 rmmod nvme_keyring 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 293952 ']' 00:16:25.333 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 293952 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 293952 ']' 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 293952 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 293952 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 293952' 00:16:25.334 killing process with pid 293952 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 293952 00:16:25.334 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 293952 00:16:25.590 [2024-07-15 16:15:08.487185] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.590 16:15:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.121 16:15:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:28.121 16:15:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:28.121 00:16:28.121 real 0m8.646s 00:16:28.121 user 0m19.641s 00:16:28.121 sys 0m2.670s 00:16:28.121 16:15:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:28.121 16:15:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:28.121 ************************************ 00:16:28.121 END TEST nvmf_host_management 00:16:28.121 ************************************ 00:16:28.121 16:15:10 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:28.121 16:15:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:28.121 16:15:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:28.121 16:15:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.121 ************************************ 00:16:28.121 START TEST nvmf_lvol 00:16:28.121 ************************************ 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:28.121 * Looking for test storage... 
00:16:28.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.121 16:15:10 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.121 16:15:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.021 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:30.022 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:30.022 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:30.022 Found net devices under 0000:84:00.0: cvl_0_0 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:30.022 Found net devices under 0000:84:00.1: cvl_0_1 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.022 
16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:16:30.022 00:16:30.022 --- 10.0.0.2 ping statistics --- 00:16:30.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.022 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:16:30.022 00:16:30.022 --- 10.0.0.1 ping statistics --- 00:16:30.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.022 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=296988 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 296988 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 296988 ']' 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:30.022 16:15:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:30.022 [2024-07-15 16:15:12.835623] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:30.022 [2024-07-15 16:15:12.835700] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.022 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.022 [2024-07-15 16:15:12.904698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:30.022 [2024-07-15 16:15:12.998716] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.022 [2024-07-15 16:15:12.998781] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:30.022 [2024-07-15 16:15:12.998799] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.022 [2024-07-15 16:15:12.998820] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.022 [2024-07-15 16:15:12.998833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.022 [2024-07-15 16:15:12.998902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.022 [2024-07-15 16:15:12.998967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.022 [2024-07-15 16:15:12.998961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.280 16:15:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:30.280 16:15:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:30.280 16:15:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.280 16:15:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.280 16:15:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:30.280 16:15:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.280 16:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:30.537 [2024-07-15 16:15:13.358782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.537 16:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:30.794 16:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:30.794 16:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:31.052 16:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:31.052 16:15:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:31.309 16:15:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:31.567 16:15:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=47dbb0cd-d8af-4107-8d67-55671e1df87b 00:16:31.567 16:15:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47dbb0cd-d8af-4107-8d67-55671e1df87b lvol 20 00:16:31.825 16:15:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ce69b4e9-a0dc-4b84-8514-7e4e23070ecb 00:16:31.825 16:15:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:32.083 16:15:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce69b4e9-a0dc-4b84-8514-7e4e23070ecb 00:16:32.340 16:15:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:32.597 [2024-07-15 16:15:15.404115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.597 16:15:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:32.854 16:15:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=297413 00:16:32.854 16:15:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:32.854 16:15:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:32.854 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.786 16:15:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ce69b4e9-a0dc-4b84-8514-7e4e23070ecb MY_SNAPSHOT 00:16:34.043 16:15:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b826bc94-59ab-4186-b90a-3d304f349e77 00:16:34.044 16:15:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ce69b4e9-a0dc-4b84-8514-7e4e23070ecb 30 00:16:34.609 16:15:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b826bc94-59ab-4186-b90a-3d304f349e77 MY_CLONE 00:16:34.866 16:15:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=28a73200-2a53-43a7-9138-1cb760dcf7a6 00:16:34.866 16:15:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 28a73200-2a53-43a7-9138-1cb760dcf7a6 00:16:35.431 16:15:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 297413 00:16:43.570 Initializing NVMe Controllers 00:16:43.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:43.570 Controller IO queue size 128, less than required. 00:16:43.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:43.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:43.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:43.570 Initialization complete. Launching workers. 
00:16:43.570 ======================================================== 00:16:43.570 Latency(us) 00:16:43.570 Device Information : IOPS MiB/s Average min max 00:16:43.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10694.35 41.77 11978.71 2060.46 80485.83 00:16:43.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10518.85 41.09 12171.00 2140.24 68453.81 00:16:43.570 ======================================================== 00:16:43.570 Total : 21213.20 82.86 12074.06 2060.46 80485.83 00:16:43.570 00:16:43.570 16:15:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:43.570 16:15:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce69b4e9-a0dc-4b84-8514-7e4e23070ecb 00:16:43.827 16:15:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47dbb0cd-d8af-4107-8d67-55671e1df87b 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.085 rmmod nvme_tcp 00:16:44.085 rmmod nvme_fabrics 00:16:44.085 rmmod nvme_keyring 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 296988 ']' 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 296988 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 296988 ']' 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 296988 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 296988 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 296988' 00:16:44.085 killing process with pid 296988 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 296988 00:16:44.085 16:15:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 296988 00:16:44.343 16:15:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.343 16:15:27 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.343 16:15:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.343 16:15:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.343 16:15:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.343 16:15:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.343 16:15:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.343 16:15:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.875 00:16:46.875 real 0m18.650s 00:16:46.875 user 1m4.106s 00:16:46.875 sys 0m5.544s 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:46.875 ************************************ 00:16:46.875 END TEST nvmf_lvol 00:16:46.875 ************************************ 00:16:46.875 16:15:29 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:46.875 16:15:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:46.875 16:15:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.875 16:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.875 ************************************ 00:16:46.875 START TEST nvmf_lvs_grow 00:16:46.875 ************************************ 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:46.875 * Looking for test storage... 
00:16:46.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.875 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:46.876 16:15:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:48.778 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:48.778 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:48.778 Found net devices under 0000:84:00.0: cvl_0_0 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:48.778 Found net devices under 0000:84:00.1: cvl_0_1 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:16:48.778 00:16:48.778 --- 10.0.0.2 ping statistics --- 00:16:48.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.778 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:16:48.778 00:16:48.778 --- 10.0.0.1 ping statistics --- 00:16:48.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.778 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:48.778 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=300677 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 300677 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 300677 ']' 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:48.779 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.779 [2024-07-15 16:15:31.602973] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:48.779 [2024-07-15 16:15:31.603071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.779 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.779 [2024-07-15 16:15:31.666666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.779 [2024-07-15 16:15:31.750301] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.779 [2024-07-15 16:15:31.750354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
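The interface bring-up traced above reduces to a short iproute2 sequence. A minimal standalone sketch, using the interface names (cvl_0_0 / cvl_0_1), addresses, and port from this run; a different test bed would substitute its own device names:

    # One port of the E810 pair is moved into a namespace, so target and
    # initiator traffic crosses the physical link between the two ports.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side (host) gets 10.0.0.1, target side (namespace) 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command line above), while rpc.py keeps talking to it over /var/tmp/spdk.sock, which as a filesystem UNIX socket is unaffected by the network namespace.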
00:16:48.779 [2024-07-15 16:15:31.750378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.779 [2024-07-15 16:15:31.750389] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.779 [2024-07-15 16:15:31.750399] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.779 [2024-07-15 16:15:31.750425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.037 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.037 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:49.037 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.037 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.037 16:15:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.037 16:15:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.037 16:15:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.294 [2024-07-15 16:15:32.153698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:49.294 ************************************ 00:16:49.294 START TEST lvs_grow_clean 00:16:49.294 ************************************ 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:49.294 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:49.552 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:49.552 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:49.810 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:16:49.810 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:16:49.810 16:15:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:50.376 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:50.376 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:50.376 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d lvol 150 00:16:50.634 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bafb0ea7-0abb-4deb-bd15-41e81fff7639 00:16:50.634 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:50.634 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:50.892 [2024-07-15 16:15:33.633320] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:50.892 [2024-07-15 16:15:33.633409] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:50.892 true 00:16:50.892 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:16:50.892 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:51.150 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:51.150 16:15:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:51.408 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bafb0ea7-0abb-4deb-bd15-41e81fff7639 00:16:51.666 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:51.666 [2024-07-15 16:15:34.632376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.935 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=301118 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 301118 /var/tmp/bdevperf.sock 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 301118 ']' 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.278 16:15:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:52.278 [2024-07-15 16:15:34.972036] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
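Condensed, the lvs_grow_clean setup the trace just walked through is the following RPC sequence. This is a sketch, not the script itself: rpc and the backing-file path are shortened stand-ins for the full workspace paths in the log, and the UUIDs are whatever each run returns:

    rpc=./scripts/rpc.py              # stand-in for the full rpc.py path
    f=/tmp/aio_bdev                   # stand-in for the test's backing file

    # 200 MiB file -> AIO bdev -> lvstore with 4 MiB clusters; after
    # metadata overhead this yields the 49 data clusters checked above.
    truncate -s 200M "$f"
    $rpc bdev_aio_create "$f" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # 150 MiB volume, then double the file and rescan: the AIO bdev grows
    # from 51200 to 102400 blocks, but total_data_clusters stays at 49
    # until bdev_lvol_grow_lvstore is issued under I/O later in the test.
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$f"
    $rpc bdev_aio_rescan aio_bdev

    # Export the volume over NVMe/TCP for the bdevperf initiator.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420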
00:16:52.278 [2024-07-15 16:15:34.972133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301118 ] 00:16:52.278 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.278 [2024-07-15 16:15:35.032636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.278 [2024-07-15 16:15:35.124569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.278 16:15:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.278 16:15:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:52.278 16:15:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:52.854 Nvme0n1 00:16:52.854 16:15:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:53.112 [ 00:16:53.112 { 00:16:53.112 "name": "Nvme0n1", 00:16:53.112 "aliases": [ 00:16:53.112 "bafb0ea7-0abb-4deb-bd15-41e81fff7639" 00:16:53.112 ], 00:16:53.112 "product_name": "NVMe disk", 00:16:53.112 "block_size": 4096, 00:16:53.112 "num_blocks": 38912, 00:16:53.112 "uuid": "bafb0ea7-0abb-4deb-bd15-41e81fff7639", 00:16:53.112 "assigned_rate_limits": { 00:16:53.112 "rw_ios_per_sec": 0, 00:16:53.112 "rw_mbytes_per_sec": 0, 00:16:53.112 "r_mbytes_per_sec": 0, 00:16:53.112 "w_mbytes_per_sec": 0 00:16:53.112 }, 00:16:53.112 "claimed": false, 00:16:53.112 "zoned": false, 00:16:53.112 "supported_io_types": { 00:16:53.112 "read": true, 00:16:53.112 "write": true, 00:16:53.112 "unmap": true, 00:16:53.112 "write_zeroes": true, 00:16:53.112 "flush": true, 00:16:53.112 "reset": true, 00:16:53.112 "compare": true, 00:16:53.112 "compare_and_write": true, 00:16:53.112 "abort": true, 00:16:53.112 "nvme_admin": true, 00:16:53.112 "nvme_io": true 00:16:53.112 }, 00:16:53.112 "memory_domains": [ 00:16:53.112 { 00:16:53.112 "dma_device_id": "system", 00:16:53.112 "dma_device_type": 1 00:16:53.112 } 00:16:53.112 ], 00:16:53.112 "driver_specific": { 00:16:53.112 "nvme": [ 00:16:53.112 { 00:16:53.112 "trid": { 00:16:53.112 "trtype": "TCP", 00:16:53.112 "adrfam": "IPv4", 00:16:53.112 "traddr": "10.0.0.2", 00:16:53.112 "trsvcid": "4420", 00:16:53.112 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:53.112 }, 00:16:53.112 "ctrlr_data": { 00:16:53.112 "cntlid": 1, 00:16:53.112 "vendor_id": "0x8086", 00:16:53.112 "model_number": "SPDK bdev Controller", 00:16:53.112 "serial_number": "SPDK0", 00:16:53.112 "firmware_revision": "24.05.1", 00:16:53.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.112 "oacs": { 00:16:53.112 "security": 0, 00:16:53.112 "format": 0, 00:16:53.112 "firmware": 0, 00:16:53.112 "ns_manage": 0 00:16:53.112 }, 00:16:53.112 "multi_ctrlr": true, 00:16:53.112 "ana_reporting": false 00:16:53.112 }, 00:16:53.112 "vs": { 00:16:53.112 "nvme_version": "1.3" 00:16:53.112 }, 00:16:53.112 "ns_data": { 00:16:53.112 "id": 1, 00:16:53.112 "can_share": true 00:16:53.112 } 00:16:53.112 } 00:16:53.112 ], 00:16:53.112 "mp_policy": "active_passive" 00:16:53.112 } 00:16:53.112 } 00:16:53.112 ] 00:16:53.112 16:15:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=301254 00:16:53.112 16:15:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.112 16:15:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:53.371 Running I/O for 10 seconds... 00:16:54.305 Latency(us) 00:16:54.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.305 Nvme0n1 : 1.00 14491.00 56.61 0.00 0.00 0.00 0.00 0.00 00:16:54.305 =================================================================================================================== 00:16:54.305 Total : 14491.00 56.61 0.00 0.00 0.00 0.00 0.00 00:16:54.305 00:16:55.238 16:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:16:55.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.238 Nvme0n1 : 2.00 15107.50 59.01 0.00 0.00 0.00 0.00 0.00 00:16:55.238 =================================================================================================================== 00:16:55.238 Total : 15107.50 59.01 0.00 0.00 0.00 0.00 0.00 00:16:55.238 00:16:55.497 true 00:16:55.497 16:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:16:55.497 16:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:55.755 16:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:55.755 16:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:55.755 16:15:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 301254 00:16:56.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.321 Nvme0n1 : 3.00 15069.67 58.87 0.00 0.00 0.00 0.00 0.00 00:16:56.321 =================================================================================================================== 00:16:56.321 Total : 15069.67 58.87 0.00 0.00 0.00 0.00 0.00 00:16:56.321 00:16:57.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.254 Nvme0n1 : 4.00 15064.50 58.85 0.00 0.00 0.00 0.00 0.00 00:16:57.254 =================================================================================================================== 00:16:57.254 Total : 15064.50 58.85 0.00 0.00 0.00 0.00 0.00 00:16:57.254 00:16:58.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.188 Nvme0n1 : 5.00 15209.40 59.41 0.00 0.00 0.00 0.00 0.00 00:16:58.188 =================================================================================================================== 00:16:58.188 Total : 15209.40 59.41 0.00 0.00 0.00 0.00 0.00 00:16:58.188 00:16:59.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.564 Nvme0n1 : 6.00 15251.17 59.57 0.00 0.00 0.00 0.00 0.00 00:16:59.564 
=================================================================================================================== 00:16:59.564 Total : 15251.17 59.57 0.00 0.00 0.00 0.00 0.00 00:16:59.564 00:17:00.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.499 Nvme0n1 : 7.00 15398.00 60.15 0.00 0.00 0.00 0.00 0.00 00:17:00.499 =================================================================================================================== 00:17:00.499 Total : 15398.00 60.15 0.00 0.00 0.00 0.00 0.00 00:17:00.499 00:17:01.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.431 Nvme0n1 : 8.00 15428.50 60.27 0.00 0.00 0.00 0.00 0.00 00:17:01.431 =================================================================================================================== 00:17:01.431 Total : 15428.50 60.27 0.00 0.00 0.00 0.00 0.00 00:17:01.431 00:17:02.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.366 Nvme0n1 : 9.00 15472.67 60.44 0.00 0.00 0.00 0.00 0.00 00:17:02.366 =================================================================================================================== 00:17:02.366 Total : 15472.67 60.44 0.00 0.00 0.00 0.00 0.00 00:17:02.366 00:17:03.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.300 Nvme0n1 : 10.00 15565.80 60.80 0.00 0.00 0.00 0.00 0.00 00:17:03.300 =================================================================================================================== 00:17:03.300 Total : 15565.80 60.80 0.00 0.00 0.00 0.00 0.00 00:17:03.300 00:17:03.300 00:17:03.300 Latency(us) 00:17:03.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.300 Nvme0n1 : 10.01 15567.12 60.81 0.00 0.00 8217.96 4126.34 16602.45 00:17:03.300 =================================================================================================================== 00:17:03.300 Total : 15567.12 60.81 0.00 0.00 8217.96 4126.34 16602.45 00:17:03.300 0 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 301118 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 301118 ']' 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 301118 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 301118 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 301118' 00:17:03.300 killing process with pid 301118 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 301118 00:17:03.300 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.300 00:17:03.300 Latency(us) 00:17:03.300 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:03.300 =================================================================================================================== 00:17:03.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.300 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 301118 00:17:03.558 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:03.816 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:04.074 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:17:04.074 16:15:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:04.332 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:04.332 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:04.332 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:04.590 [2024-07-15 16:15:47.484628] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:04.590 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:17:04.847 request: 00:17:04.847 { 00:17:04.847 "uuid": "7eba28b9-c0c4-4d40-8c8d-47308385ee3d", 00:17:04.847 "method": "bdev_lvol_get_lvstores", 00:17:04.847 "req_id": 1 00:17:04.847 } 00:17:04.847 Got JSON-RPC error response 00:17:04.847 response: 00:17:04.847 { 00:17:04.847 "code": -19, 00:17:04.847 "message": "No such device" 00:17:04.847 } 00:17:04.847 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:04.847 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.848 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.848 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.848 16:15:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:05.105 aio_bdev 00:17:05.105 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bafb0ea7-0abb-4deb-bd15-41e81fff7639 00:17:05.105 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=bafb0ea7-0abb-4deb-bd15-41e81fff7639 00:17:05.105 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:05.105 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:05.105 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:05.105 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:05.105 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:05.362 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bafb0ea7-0abb-4deb-bd15-41e81fff7639 -t 2000 00:17:05.619 [ 00:17:05.619 { 00:17:05.619 "name": "bafb0ea7-0abb-4deb-bd15-41e81fff7639", 00:17:05.619 "aliases": [ 00:17:05.619 "lvs/lvol" 00:17:05.619 ], 00:17:05.619 "product_name": "Logical Volume", 00:17:05.619 "block_size": 4096, 00:17:05.619 "num_blocks": 38912, 00:17:05.619 "uuid": "bafb0ea7-0abb-4deb-bd15-41e81fff7639", 00:17:05.619 "assigned_rate_limits": { 00:17:05.619 "rw_ios_per_sec": 0, 00:17:05.619 "rw_mbytes_per_sec": 0, 00:17:05.619 "r_mbytes_per_sec": 0, 00:17:05.619 "w_mbytes_per_sec": 0 00:17:05.619 }, 00:17:05.619 "claimed": false, 00:17:05.619 "zoned": false, 00:17:05.619 "supported_io_types": { 00:17:05.619 "read": true, 00:17:05.619 "write": true, 00:17:05.619 "unmap": true, 00:17:05.619 "write_zeroes": true, 00:17:05.619 "flush": false, 00:17:05.619 "reset": true, 00:17:05.619 "compare": false, 00:17:05.619 "compare_and_write": false, 00:17:05.619 "abort": false, 00:17:05.619 "nvme_admin": false, 00:17:05.619 "nvme_io": false 00:17:05.619 }, 00:17:05.619 "driver_specific": { 00:17:05.619 "lvol": { 00:17:05.619 "lvol_store_uuid": "7eba28b9-c0c4-4d40-8c8d-47308385ee3d", 00:17:05.619 "base_bdev": "aio_bdev", 
00:17:05.619 "thin_provision": false, 00:17:05.619 "num_allocated_clusters": 38, 00:17:05.619 "snapshot": false, 00:17:05.619 "clone": false, 00:17:05.619 "esnap_clone": false 00:17:05.619 } 00:17:05.619 } 00:17:05.619 } 00:17:05.619 ] 00:17:05.619 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:05.619 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:17:05.619 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:05.876 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:05.876 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:17:05.876 16:15:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:06.133 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:06.133 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bafb0ea7-0abb-4deb-bd15-41e81fff7639 00:17:06.390 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7eba28b9-c0c4-4d40-8c8d-47308385ee3d 00:17:06.646 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:06.903 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.903 00:17:06.903 real 0m17.644s 00:17:06.903 user 0m17.133s 00:17:06.903 sys 0m1.902s 00:17:06.903 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:06.903 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:06.903 ************************************ 00:17:06.903 END TEST lvs_grow_clean 00:17:06.903 ************************************ 00:17:06.903 16:15:49 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:06.903 16:15:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:06.903 16:15:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:06.903 16:15:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:07.160 ************************************ 00:17:07.160 START TEST lvs_grow_dirty 00:17:07.160 ************************************ 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:07.160 16:15:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:07.417 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:07.417 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:07.674 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:07.674 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:07.674 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:07.931 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:07.931 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:07.931 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 lvol 150 00:17:08.187 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1cfc1a80-421b-4f4c-bead-719e90d901a1 00:17:08.187 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.187 16:15:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:08.445 [2024-07-15 16:15:51.263117] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:08.445 [2024-07-15 16:15:51.263217] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:08.445 true 00:17:08.445 16:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:08.445 16:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:08.702 16:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:08.702 16:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:08.960 16:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1cfc1a80-421b-4f4c-bead-719e90d901a1 00:17:09.217 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:09.474 [2024-07-15 16:15:52.314241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.474 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=303167 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 303167 /var/tmp/bdevperf.sock 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 303167 ']' 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:09.731 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.731 [2024-07-15 16:15:52.617465] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
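The pass/fail core of the test is the grow step that runs while bdevperf pushes random writes for ten seconds. A sketch of that check, reusing the $rpc and $lvs stand-ins from the sketch above (the expected count of 99 clusters is the value this run logs):

    # Grow the lvstore to cover the enlarged AIO bdev while I/O is in
    # flight, then confirm the data-cluster count doubled (49 -> 99).
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || exit 1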
00:17:09.731 [2024-07-15 16:15:52.617552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303167 ] 00:17:09.731 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.731 [2024-07-15 16:15:52.683846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.988 [2024-07-15 16:15:52.775819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.988 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:09.988 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:09.988 16:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:10.552 Nvme0n1 00:17:10.552 16:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:10.552 [ 00:17:10.552 { 00:17:10.552 "name": "Nvme0n1", 00:17:10.552 "aliases": [ 00:17:10.552 "1cfc1a80-421b-4f4c-bead-719e90d901a1" 00:17:10.552 ], 00:17:10.552 "product_name": "NVMe disk", 00:17:10.552 "block_size": 4096, 00:17:10.552 "num_blocks": 38912, 00:17:10.552 "uuid": "1cfc1a80-421b-4f4c-bead-719e90d901a1", 00:17:10.552 "assigned_rate_limits": { 00:17:10.552 "rw_ios_per_sec": 0, 00:17:10.552 "rw_mbytes_per_sec": 0, 00:17:10.552 "r_mbytes_per_sec": 0, 00:17:10.552 "w_mbytes_per_sec": 0 00:17:10.552 }, 00:17:10.552 "claimed": false, 00:17:10.552 "zoned": false, 00:17:10.552 "supported_io_types": { 00:17:10.552 "read": true, 00:17:10.552 "write": true, 00:17:10.552 "unmap": true, 00:17:10.552 "write_zeroes": true, 00:17:10.552 "flush": true, 00:17:10.552 "reset": true, 00:17:10.552 "compare": true, 00:17:10.552 "compare_and_write": true, 00:17:10.552 "abort": true, 00:17:10.552 "nvme_admin": true, 00:17:10.552 "nvme_io": true 00:17:10.552 }, 00:17:10.552 "memory_domains": [ 00:17:10.552 { 00:17:10.552 "dma_device_id": "system", 00:17:10.552 "dma_device_type": 1 00:17:10.552 } 00:17:10.552 ], 00:17:10.552 "driver_specific": { 00:17:10.552 "nvme": [ 00:17:10.552 { 00:17:10.552 "trid": { 00:17:10.552 "trtype": "TCP", 00:17:10.552 "adrfam": "IPv4", 00:17:10.552 "traddr": "10.0.0.2", 00:17:10.552 "trsvcid": "4420", 00:17:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:10.552 }, 00:17:10.552 "ctrlr_data": { 00:17:10.552 "cntlid": 1, 00:17:10.552 "vendor_id": "0x8086", 00:17:10.552 "model_number": "SPDK bdev Controller", 00:17:10.552 "serial_number": "SPDK0", 00:17:10.552 "firmware_revision": "24.05.1", 00:17:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.552 "oacs": { 00:17:10.552 "security": 0, 00:17:10.552 "format": 0, 00:17:10.552 "firmware": 0, 00:17:10.552 "ns_manage": 0 00:17:10.552 }, 00:17:10.552 "multi_ctrlr": true, 00:17:10.552 "ana_reporting": false 00:17:10.552 }, 00:17:10.552 "vs": { 00:17:10.552 "nvme_version": "1.3" 00:17:10.552 }, 00:17:10.552 "ns_data": { 00:17:10.552 "id": 1, 00:17:10.552 "can_share": true 00:17:10.552 } 00:17:10.552 } 00:17:10.552 ], 00:17:10.552 "mp_policy": "active_passive" 00:17:10.552 } 00:17:10.552 } 00:17:10.552 ] 00:17:10.552 16:15:53 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=303302 00:17:10.552 16:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.552 16:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:10.809 Running I/O for 10 seconds... 00:17:11.742 Latency(us) 00:17:11.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.742 Nvme0n1 : 1.00 14499.00 56.64 0.00 0.00 0.00 0.00 0.00 00:17:11.742 =================================================================================================================== 00:17:11.742 Total : 14499.00 56.64 0.00 0.00 0.00 0.00 0.00 00:17:11.742 00:17:12.675 16:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:12.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.675 Nvme0n1 : 2.00 14558.50 56.87 0.00 0.00 0.00 0.00 0.00 00:17:12.675 =================================================================================================================== 00:17:12.675 Total : 14558.50 56.87 0.00 0.00 0.00 0.00 0.00 00:17:12.675 00:17:12.933 true 00:17:12.933 16:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:12.933 16:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:13.191 16:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:13.191 16:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:13.191 16:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 303302 00:17:13.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.761 Nvme0n1 : 3.00 14701.00 57.43 0.00 0.00 0.00 0.00 0.00 00:17:13.761 =================================================================================================================== 00:17:13.761 Total : 14701.00 57.43 0.00 0.00 0.00 0.00 0.00 00:17:13.761 00:17:14.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.743 Nvme0n1 : 4.00 14773.75 57.71 0.00 0.00 0.00 0.00 0.00 00:17:14.743 =================================================================================================================== 00:17:14.743 Total : 14773.75 57.71 0.00 0.00 0.00 0.00 0.00 00:17:14.743 00:17:15.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.676 Nvme0n1 : 5.00 14821.60 57.90 0.00 0.00 0.00 0.00 0.00 00:17:15.676 =================================================================================================================== 00:17:15.676 Total : 14821.60 57.90 0.00 0.00 0.00 0.00 0.00 00:17:15.676 00:17:17.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.047 Nvme0n1 : 6.00 14872.33 58.10 0.00 0.00 0.00 0.00 0.00 00:17:17.047 
=================================================================================================================== 00:17:17.047 Total : 14872.33 58.10 0.00 0.00 0.00 0.00 0.00 00:17:17.047 00:17:17.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.980 Nvme0n1 : 7.00 14898.71 58.20 0.00 0.00 0.00 0.00 0.00 00:17:17.980 =================================================================================================================== 00:17:17.980 Total : 14898.71 58.20 0.00 0.00 0.00 0.00 0.00 00:17:17.980 00:17:18.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.913 Nvme0n1 : 8.00 14942.12 58.37 0.00 0.00 0.00 0.00 0.00 00:17:18.913 =================================================================================================================== 00:17:18.913 Total : 14942.12 58.37 0.00 0.00 0.00 0.00 0.00 00:17:18.913 00:17:19.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.847 Nvme0n1 : 9.00 14975.56 58.50 0.00 0.00 0.00 0.00 0.00 00:17:19.847 =================================================================================================================== 00:17:19.847 Total : 14975.56 58.50 0.00 0.00 0.00 0.00 0.00 00:17:19.847 00:17:20.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.779 Nvme0n1 : 10.00 15093.50 58.96 0.00 0.00 0.00 0.00 0.00 00:17:20.779 =================================================================================================================== 00:17:20.779 Total : 15093.50 58.96 0.00 0.00 0.00 0.00 0.00 00:17:20.779 00:17:20.779 00:17:20.779 Latency(us) 00:17:20.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.779 Nvme0n1 : 10.00 15101.24 58.99 0.00 0.00 8471.50 2233.08 17185.00 00:17:20.779 =================================================================================================================== 00:17:20.779 Total : 15101.24 58.99 0.00 0.00 8471.50 2233.08 17185.00 00:17:20.779 0 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 303167 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 303167 ']' 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 303167 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303167 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303167' 00:17:20.779 killing process with pid 303167 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 303167 00:17:20.779 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.779 00:17:20.779 Latency(us) 00:17:20.779 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:17:20.779 =================================================================================================================== 00:17:20.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.779 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 303167 00:17:21.037 16:16:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.295 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:21.553 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:21.553 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 300677 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 300677 00:17:21.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 300677 Killed "${NVMF_APP[@]}" "$@" 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=304627 00:17:21.812 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:21.813 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 304627 00:17:21.813 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 304627 ']' 00:17:21.813 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.813 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:21.813 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
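Where the clean pass tears everything down gracefully, the dirty pass simulates a crash: the first nvmf_tgt (pid 300677) is killed with SIGKILL while the grown lvstore is still marked dirty, and a second target is started against the same namespace and backing file. Roughly, with the same stand-ins as above:

    # SIGKILL leaves the blobstore superblock dirty on disk; the 'Killed'
    # line and the 'true' after it are the harness absorbing the signal.
    kill -9 "$nvmfpid"
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Re-creating the AIO bdev makes the lvol module reload the lvstore,
    # which triggers the blobstore recovery logged just below
    # ('Performing recovery on blobstore', then one 'Recover: blob' notice
    # per blob found in the metadata).
    $rpc bdev_aio_create "$f" aio_bdev 4096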
00:17:21.813 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:21.813 16:16:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:22.072 [2024-07-15 16:16:04.798816] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:22.072 [2024-07-15 16:16:04.798893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.072 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.072 [2024-07-15 16:16:04.863687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.072 [2024-07-15 16:16:04.947684] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.072 [2024-07-15 16:16:04.947765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.072 [2024-07-15 16:16:04.947781] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.072 [2024-07-15 16:16:04.947801] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.072 [2024-07-15 16:16:04.947811] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.072 [2024-07-15 16:16:04.947837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.072 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:22.072 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:22.072 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.072 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.072 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:22.329 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.329 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.329 [2024-07-15 16:16:05.299513] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:22.329 [2024-07-15 16:16:05.299663] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:22.329 [2024-07-15 16:16:05.299719] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1cfc1a80-421b-4f4c-bead-719e90d901a1 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=1cfc1a80-421b-4f4c-bead-719e90d901a1 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:22.587 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:22.845 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cfc1a80-421b-4f4c-bead-719e90d901a1 -t 2000 00:17:23.103 [ 00:17:23.103 { 00:17:23.103 "name": "1cfc1a80-421b-4f4c-bead-719e90d901a1", 00:17:23.103 "aliases": [ 00:17:23.103 "lvs/lvol" 00:17:23.103 ], 00:17:23.103 "product_name": "Logical Volume", 00:17:23.103 "block_size": 4096, 00:17:23.103 "num_blocks": 38912, 00:17:23.103 "uuid": "1cfc1a80-421b-4f4c-bead-719e90d901a1", 00:17:23.103 "assigned_rate_limits": { 00:17:23.103 "rw_ios_per_sec": 0, 00:17:23.103 "rw_mbytes_per_sec": 0, 00:17:23.103 "r_mbytes_per_sec": 0, 00:17:23.103 "w_mbytes_per_sec": 0 00:17:23.103 }, 00:17:23.103 "claimed": false, 00:17:23.103 "zoned": false, 00:17:23.103 "supported_io_types": { 00:17:23.103 "read": true, 00:17:23.103 "write": true, 00:17:23.103 "unmap": true, 00:17:23.103 "write_zeroes": true, 00:17:23.103 "flush": false, 00:17:23.103 "reset": true, 00:17:23.103 "compare": false, 00:17:23.103 "compare_and_write": false, 00:17:23.103 "abort": false, 00:17:23.103 "nvme_admin": false, 00:17:23.103 "nvme_io": false 00:17:23.103 }, 00:17:23.103 "driver_specific": { 00:17:23.103 "lvol": { 00:17:23.103 "lvol_store_uuid": "1b331a59-de1c-4b0c-bd18-fe8a08568ac3", 00:17:23.103 "base_bdev": "aio_bdev", 00:17:23.103 "thin_provision": false, 00:17:23.103 "num_allocated_clusters": 38, 00:17:23.103 "snapshot": false, 00:17:23.103 "clone": false, 00:17:23.103 "esnap_clone": false 00:17:23.103 } 00:17:23.103 } 00:17:23.103 } 00:17:23.103 ] 00:17:23.103 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:23.103 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:23.103 16:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:23.361 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:23.361 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:23.361 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:23.619 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:23.619 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:23.877 [2024-07-15 16:16:06.620645] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.877 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:24.135 request: 00:17:24.135 { 00:17:24.135 "uuid": "1b331a59-de1c-4b0c-bd18-fe8a08568ac3", 00:17:24.135 "method": "bdev_lvol_get_lvstores", 00:17:24.135 "req_id": 1 00:17:24.135 } 00:17:24.135 Got JSON-RPC error response 00:17:24.135 response: 00:17:24.135 { 00:17:24.135 "code": -19, 00:17:24.135 "message": "No such device" 00:17:24.135 } 00:17:24.135 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:24.135 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.135 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.135 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.135 16:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:24.393 aio_bdev 00:17:24.393 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1cfc1a80-421b-4f4c-bead-719e90d901a1 00:17:24.393 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=1cfc1a80-421b-4f4c-bead-719e90d901a1 00:17:24.393 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:24.393 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:24.393 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
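The request/response pair above is the interesting part of this teardown check: with aio_bdev deleted, bdev_lvol_get_lvstores must come back with code -19 (No such device), and the harness's NOT wrapper turns that expected failure into a pass. Without the wrapper, the same assertion is a couple of lines of shell, assuming the rpc.py and UUID used throughout this test:

    # the lvstore must not be visible once its base bdev is gone
    if rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" 2>/dev/null; then
        echo "lvstore still visible after aio_bdev removal" >&2
        exit 1
    fi

Re-creating the AIO bdev from the same backing file, as the log has just done, brings the lvstore and its lvol back intact; the repeated bdev_get_bdevs dump that follows verifies it.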
00:17:24.393 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:24.393 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:24.651 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1cfc1a80-421b-4f4c-bead-719e90d901a1 -t 2000 00:17:24.908 [ 00:17:24.908 { 00:17:24.908 "name": "1cfc1a80-421b-4f4c-bead-719e90d901a1", 00:17:24.908 "aliases": [ 00:17:24.908 "lvs/lvol" 00:17:24.908 ], 00:17:24.908 "product_name": "Logical Volume", 00:17:24.908 "block_size": 4096, 00:17:24.908 "num_blocks": 38912, 00:17:24.908 "uuid": "1cfc1a80-421b-4f4c-bead-719e90d901a1", 00:17:24.908 "assigned_rate_limits": { 00:17:24.908 "rw_ios_per_sec": 0, 00:17:24.908 "rw_mbytes_per_sec": 0, 00:17:24.908 "r_mbytes_per_sec": 0, 00:17:24.908 "w_mbytes_per_sec": 0 00:17:24.908 }, 00:17:24.908 "claimed": false, 00:17:24.908 "zoned": false, 00:17:24.908 "supported_io_types": { 00:17:24.908 "read": true, 00:17:24.908 "write": true, 00:17:24.908 "unmap": true, 00:17:24.908 "write_zeroes": true, 00:17:24.908 "flush": false, 00:17:24.908 "reset": true, 00:17:24.908 "compare": false, 00:17:24.908 "compare_and_write": false, 00:17:24.908 "abort": false, 00:17:24.908 "nvme_admin": false, 00:17:24.908 "nvme_io": false 00:17:24.908 }, 00:17:24.908 "driver_specific": { 00:17:24.908 "lvol": { 00:17:24.908 "lvol_store_uuid": "1b331a59-de1c-4b0c-bd18-fe8a08568ac3", 00:17:24.908 "base_bdev": "aio_bdev", 00:17:24.908 "thin_provision": false, 00:17:24.908 "num_allocated_clusters": 38, 00:17:24.908 "snapshot": false, 00:17:24.908 "clone": false, 00:17:24.908 "esnap_clone": false 00:17:24.908 } 00:17:24.908 } 00:17:24.908 } 00:17:24.908 ] 00:17:24.908 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:24.908 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:24.908 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:25.166 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:25.166 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:25.166 16:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:25.424 16:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:25.424 16:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1cfc1a80-421b-4f4c-bead-719e90d901a1 00:17:25.682 16:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b331a59-de1c-4b0c-bd18-fe8a08568ac3 00:17:25.940 16:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:26.196 16:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.196 00:17:26.196 real 0m19.078s 00:17:26.196 user 0m48.281s 00:17:26.196 sys 0m5.027s 00:17:26.196 16:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.196 16:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:26.196 ************************************ 00:17:26.196 END TEST lvs_grow_dirty 00:17:26.196 ************************************ 00:17:26.196 16:16:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:26.196 16:16:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:26.196 16:16:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:26.196 nvmf_trace.0 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.196 rmmod nvme_tcp 00:17:26.196 rmmod nvme_fabrics 00:17:26.196 rmmod nvme_keyring 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 304627 ']' 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 304627 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 304627 ']' 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 304627 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 304627 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 304627' 00:17:26.196 killing process with pid 304627 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 304627 00:17:26.196 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 304627 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.453 16:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.982 16:16:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.982 00:17:28.982 real 0m42.092s 00:17:28.982 user 1m11.087s 00:17:28.982 sys 0m8.849s 00:17:28.982 16:16:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:28.982 16:16:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:28.982 ************************************ 00:17:28.982 END TEST nvmf_lvs_grow 00:17:28.982 ************************************ 00:17:28.982 16:16:11 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:28.982 16:16:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:28.982 16:16:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:28.982 16:16:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.982 ************************************ 00:17:28.982 START TEST nvmf_bdev_io_wait 00:17:28.982 ************************************ 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:28.982 * Looking for test storage... 
00:17:28.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.982 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.983 16:16:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:30.884 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:30.884 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.884 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:30.885 Found net devices under 0000:84:00.0: cvl_0_0 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:30.885 Found net devices under 0000:84:00.1: cvl_0_1 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:17:30.885 00:17:30.885 --- 10.0.0.2 ping statistics --- 00:17:30.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.885 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:17:30.885 00:17:30.885 --- 10.0.0.1 ping statistics --- 00:17:30.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.885 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=307155 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 307155 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 307155 ']' 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:30.885 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:30.885 [2024-07-15 16:16:13.690790] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
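Before any NVMe-oF work starts, the harness wires up its test network with plain iproute2: one port of the NIC pair stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1) and the other is moved into a private namespace as the target (cvl_0_0, 10.0.0.2), so NVMe/TCP traffic crosses a real link rather than loopback. Condensed, the commands the log just ran are:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two sub-millisecond pings (0.244 ms and 0.164 ms) confirm the path is sound before nvmf_tgt is launched inside the namespace with -m 0xF, one reactor per core in the mask.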
00:17:30.885 [2024-07-15 16:16:13.690867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.885 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.885 [2024-07-15 16:16:13.761999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.885 [2024-07-15 16:16:13.854590] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.885 [2024-07-15 16:16:13.854652] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.885 [2024-07-15 16:16:13.854676] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.885 [2024-07-15 16:16:13.854690] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.885 [2024-07-15 16:16:13.854701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.885 [2024-07-15 16:16:13.854788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.885 [2024-07-15 16:16:13.854848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.885 [2024-07-15 16:16:13.854907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.885 [2024-07-15 16:16:13.854910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 [2024-07-15 16:16:13.991252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.145 16:16:13 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.145 16:16:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 Malloc0 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:31.145 [2024-07-15 16:16:14.049032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=307183 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=307185 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:31.145 { 00:17:31.145 "params": { 00:17:31.145 "name": "Nvme$subsystem", 00:17:31.145 "trtype": "$TEST_TRANSPORT", 00:17:31.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.145 "adrfam": "ipv4", 00:17:31.145 "trsvcid": "$NVMF_PORT", 00:17:31.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.145 "hdgst": ${hdgst:-false}, 00:17:31.145 "ddgst": ${ddgst:-false} 00:17:31.145 }, 00:17:31.145 "method": "bdev_nvme_attach_controller" 00:17:31.145 } 00:17:31.145 EOF 00:17:31.145 )") 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=307187 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:31.145 { 00:17:31.145 "params": { 00:17:31.145 "name": "Nvme$subsystem", 00:17:31.145 "trtype": "$TEST_TRANSPORT", 00:17:31.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.145 "adrfam": "ipv4", 00:17:31.145 "trsvcid": "$NVMF_PORT", 00:17:31.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.145 "hdgst": ${hdgst:-false}, 00:17:31.145 "ddgst": ${ddgst:-false} 00:17:31.145 }, 00:17:31.145 "method": "bdev_nvme_attach_controller" 00:17:31.145 } 00:17:31.145 EOF 00:17:31.145 )") 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=307190 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:31.145 { 00:17:31.145 "params": { 00:17:31.145 "name": "Nvme$subsystem", 00:17:31.145 "trtype": "$TEST_TRANSPORT", 00:17:31.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.145 "adrfam": "ipv4", 00:17:31.145 "trsvcid": "$NVMF_PORT", 00:17:31.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.145 "hdgst": ${hdgst:-false}, 00:17:31.145 "ddgst": ${ddgst:-false} 00:17:31.145 }, 00:17:31.145 "method": "bdev_nvme_attach_controller" 00:17:31.145 } 00:17:31.145 EOF 00:17:31.145 )") 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:17:31.145 { 00:17:31.145 "params": { 00:17:31.145 "name": "Nvme$subsystem", 00:17:31.145 "trtype": "$TEST_TRANSPORT", 00:17:31.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.145 "adrfam": "ipv4", 00:17:31.145 "trsvcid": "$NVMF_PORT", 00:17:31.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.145 "hdgst": ${hdgst:-false}, 00:17:31.145 "ddgst": ${ddgst:-false} 00:17:31.145 }, 00:17:31.145 "method": "bdev_nvme_attach_controller" 00:17:31.145 } 00:17:31.145 EOF 00:17:31.145 )") 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 307183 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.145 "params": { 00:17:31.145 "name": "Nvme1", 00:17:31.145 "trtype": "tcp", 00:17:31.145 "traddr": "10.0.0.2", 00:17:31.145 "adrfam": "ipv4", 00:17:31.145 "trsvcid": "4420", 00:17:31.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.145 "hdgst": false, 00:17:31.145 "ddgst": false 00:17:31.145 }, 00:17:31.145 "method": "bdev_nvme_attach_controller" 00:17:31.145 }' 00:17:31.145 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.146 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.146 "params": { 00:17:31.146 "name": "Nvme1", 00:17:31.146 "trtype": "tcp", 00:17:31.146 "traddr": "10.0.0.2", 00:17:31.146 "adrfam": "ipv4", 00:17:31.146 "trsvcid": "4420", 00:17:31.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.146 "hdgst": false, 00:17:31.146 "ddgst": false 00:17:31.146 }, 00:17:31.146 "method": "bdev_nvme_attach_controller" 00:17:31.146 }' 00:17:31.146 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
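Each of those heredoc fragments becomes one bdev_nvme_attach_controller entry; gen_nvmf_target_json joins them with IFS=',' and pipes the result through jq, and every bdevperf instance reads the finished document over --json /dev/fd/63. For reference, a filled-in config as this run would produce it looks roughly like the following; the outer subsystems/bdev wrapper is how the helper frames it and is an assumption here, since only the inner objects are echoed above:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

Feeding this through a process substitution, --json <(gen_nvmf_target_json), is exactly what produces the /dev/fd/63 paths in the command lines above.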
00:17:31.146 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.146 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.146 "params": { 00:17:31.146 "name": "Nvme1", 00:17:31.146 "trtype": "tcp", 00:17:31.146 "traddr": "10.0.0.2", 00:17:31.146 "adrfam": "ipv4", 00:17:31.146 "trsvcid": "4420", 00:17:31.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.146 "hdgst": false, 00:17:31.146 "ddgst": false 00:17:31.146 }, 00:17:31.146 "method": "bdev_nvme_attach_controller" 00:17:31.146 }' 00:17:31.146 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:31.146 16:16:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:31.146 "params": { 00:17:31.146 "name": "Nvme1", 00:17:31.146 "trtype": "tcp", 00:17:31.146 "traddr": "10.0.0.2", 00:17:31.146 "adrfam": "ipv4", 00:17:31.146 "trsvcid": "4420", 00:17:31.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.146 "hdgst": false, 00:17:31.146 "ddgst": false 00:17:31.146 }, 00:17:31.146 "method": "bdev_nvme_attach_controller" 00:17:31.146 }' 00:17:31.146 [2024-07-15 16:16:14.094483] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:31.146 [2024-07-15 16:16:14.094483] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:31.146 [2024-07-15 16:16:14.094571] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:31.146 [2024-07-15 16:16:14.094572] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:31.146 [2024-07-15 16:16:14.095885] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:31.146 [2024-07-15 16:16:14.095885] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:17:31.146 [2024-07-15 16:16:14.095963] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:31.146 [2024-07-15 16:16:14.095963] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:31.404 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.404 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.404 [2024-07-15 16:16:14.274695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.404 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.404 [2024-07-15 16:16:14.353315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:31.404 [2024-07-15 16:16:14.379330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.662 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.662 [2024-07-15 16:16:14.445502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.662 [2024-07-15 16:16:14.452556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:31.662 [2024-07-15 16:16:14.515454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.662 [2024-07-15 16:16:14.517910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:31.662 [2024-07-15 16:16:14.588176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:31.920 Running I/O for 1 seconds... 00:17:31.920 Running I/O for 1 seconds... 00:17:31.920 Running I/O for 1 seconds... 00:17:31.920 Running I/O for 1 seconds...
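At this point four bdevperf instances are running concurrently against the same cnode1 namespace, one workload each, on disjoint core masks so no two jobs share a reactor. The launch pattern, condensed from the command lines above (flags and shm IDs as used in this run):

    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID"   # the script then waits on the read, flush and unmap pids in turn

The distinct -i values give each instance its own shared-memory file prefix (spdk1 through spdk4), which is why four independent EAL startup banners appear above; the four result tables that follow are printed by the instances as their one-second runs complete.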
00:17:32.855 
00:17:32.855 Latency(us)
00:17:32.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:32.855 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:17:32.855 Nvme1n1 : 1.01 11645.82 45.49 0.00 0.00 10950.88 6213.78 20777.34
00:17:32.855 ===================================================================================================================
00:17:32.855 Total : 11645.82 45.49 0.00 0.00 10950.88 6213.78 20777.34
00:17:32.855 
00:17:32.855 Latency(us)
00:17:32.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:32.855 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:17:32.855 Nvme1n1 : 1.02 5394.03 21.07 0.00 0.00 23443.90 7912.87 34369.99
00:17:32.855 ===================================================================================================================
00:17:32.855 Total : 5394.03 21.07 0.00 0.00 23443.90 7912.87 34369.99
00:17:33.113 
00:17:33.113 Latency(us)
00:17:33.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:33.113 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:17:33.113 Nvme1n1 : 1.00 199793.67 780.44 0.00 0.00 637.99 282.17 773.69
00:17:33.113 ===================================================================================================================
00:17:33.113 Total : 199793.67 780.44 0.00 0.00 637.99 282.17 773.69
00:17:33.113 
00:17:33.113 Latency(us)
00:17:33.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:33.113 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:17:33.113 Nvme1n1 : 1.01 5404.60 21.11 0.00 0.00 23569.96 9126.49 50875.35
00:17:33.113 ===================================================================================================================
00:17:33.113 Total : 5404.60 21.11 0.00 0.00 23569.96 9126.49 50875.35
00:17:33.113 16:16:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 307185 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 307187 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 307190 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.371 rmmod nvme_tcp 00:17:33.371 rmmod nvme_fabrics 00:17:33.371 rmmod nvme_keyring 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 307155 ']' 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 307155 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 307155 ']' 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 307155 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 307155 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 307155' 00:17:33.371 killing process with pid 307155 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 307155 00:17:33.371 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 307155 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.630 16:16:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.533 16:16:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.533 00:17:35.533 real 0m7.056s 00:17:35.533 user 0m16.378s 00:17:35.533 sys 0m3.399s 00:17:35.533 16:16:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:35.533 16:16:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:35.533 ************************************ 00:17:35.533 END TEST nvmf_bdev_io_wait 00:17:35.533 ************************************ 00:17:35.791 16:16:18 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:35.791 16:16:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:35.791 16:16:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:35.791 16:16:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.791 ************************************ 00:17:35.791 START TEST nvmf_queue_depth 00:17:35.791 ************************************ 00:17:35.791 16:16:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:35.791 * Looking for test storage... 00:17:35.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.791 16:16:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.791 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:35.791 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.791 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.792 16:16:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.693 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.693 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.693 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.693 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.693 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.694 
16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:37.694 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:37.694 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:37.694 Found net devices under 0000:84:00.0: cvl_0_0 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:37.694 Found net devices under 0000:84:00.1: cvl_0_1 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:37.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:17:37.694 00:17:37.694 --- 10.0.0.2 ping statistics --- 00:17:37.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.694 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:17:37.694 00:17:37.694 --- 10.0.0.1 ping statistics --- 00:17:37.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.694 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.694 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=309419 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 309419 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 309419 ']' 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.988 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:37.988 [2024-07-15 16:16:20.718402] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
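The trace above brings up nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. A minimal standalone sketch of that launch-and-wait step, assuming this job's workspace paths; the polling loop is an illustrative stand-in for waitforlisten, not its exact implementation:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Same invocation as the nvmfappstart -m 0x2 trace above: shm id 0, tracepoint mask 0xFFFF, core mask 0x2.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app is ready; rpc_get_methods is a cheap no-op query.
# The 100-attempt budget mirrors max_retries=100 in the waitforlisten trace.
for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done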
00:17:37.988 [2024-07-15 16:16:20.718476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.988 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.988 [2024-07-15 16:16:20.790702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.988 [2024-07-15 16:16:20.880886] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.988 [2024-07-15 16:16:20.880946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.988 [2024-07-15 16:16:20.880964] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.988 [2024-07-15 16:16:20.880978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.988 [2024-07-15 16:16:20.880989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.988 [2024-07-15 16:16:20.881018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.271 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.271 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:38.271 16:16:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.271 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.271 16:16:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 [2024-07-15 16:16:21.018907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 Malloc0 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.271 16:16:21 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 [2024-07-15 16:16:21.077243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=309449 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 309449 /var/tmp/bdevperf.sock 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 309449 ']' 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.271 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:38.271 [2024-07-15 16:16:21.122552] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
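For reference, the rpc_cmd calls traced at queue_depth.sh lines 23-27 above correspond one-to-one to plain rpc.py invocations against the target socket. A sketch of the equivalent provisioning sequence, reusing the exact names and sizes from this test ($SPDK as in the earlier sketch):

RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"   # unquoted expansion below relies on word splitting
$RPC nvmf_create_transport -t tcp -o -u 8192       # -o and -u 8192 exactly as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420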
00:17:38.271 [2024-07-15 16:16:21.122613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309449 ]
00:17:38.271 EAL: No free 2048 kB hugepages reported on node 1
00:17:38.271 [2024-07-15 16:16:21.184345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:38.529 [2024-07-15 16:16:21.276136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:38.529 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:38.529 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:17:38.529 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:38.529 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:38.529 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:38.786 NVMe0n1
00:17:38.786 16:16:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:38.786 16:16:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:38.786 Running I/O for 10 seconds...
00:17:50.983
00:17:50.983 Latency(us)
00:17:50.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.983 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:17:50.983 Verification LBA range: start 0x0 length 0x4000
00:17:50.983 NVMe0n1 : 10.09 8773.17 34.27 0.00 0.00 116137.81 24272.59 74565.40
00:17:50.983 ===================================================================================================================
00:17:50.983 Total : 8773.17 34.27 0.00 0.00 116137.81 24272.59 74565.40
00:17:50.983 0
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 309449
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 309449 ']'
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 309449
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 309449
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 309449'
killing process with pid 309449
16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 309449
Received shutdown signal, test time was about 10.000000 seconds
00:17:50.983
00:17:50.983 Latency(us)
00:17:50.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.983 ===================================================================================================================
00:17:50.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:50.983 16:16:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 309449
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:50.983 rmmod nvme_tcp
00:17:50.983 rmmod nvme_fabrics
00:17:50.983 rmmod nvme_keyring
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 309419 ']'
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 309419
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 309419 ']'
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 309419
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 309419
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 309419'
killing process with pid 309419
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 309419
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 309419
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:50.983 16:16:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:51.548 16:16:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:51.548
00:17:51.548 real 0m15.894s
00:17:51.548 user 0m22.314s
00:17:51.548 sys 0m3.151s
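The initiator side of the test just completed mirrors the trace above: bdevperf starts idle (-z) on its own RPC socket, the remote namespace is attached over NVMe/TCP, and bdevperf.py kicks off the queued workload. A condensed sketch using the parameters from this run (queue depth 1024, 4096-byte verify I/O, 10 seconds; $SPDK as in the earlier sketches):

# Start bdevperf idle so the controller can be attached before I/O begins (queue_depth.sh line 29).
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# Attach the target's namespace as bdev NVMe0n1 over the fabric, as in queue_depth.sh line 34.
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Run the configured workload against every attached bdev and print the latency table.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests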
00:17:51.548 16:16:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:51.548 16:16:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.548 ************************************ 00:17:51.548 END TEST nvmf_queue_depth 00:17:51.548 ************************************ 00:17:51.548 16:16:34 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:51.548 16:16:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:51.548 16:16:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:51.548 16:16:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.548 ************************************ 00:17:51.548 START TEST nvmf_target_multipath 00:17:51.548 ************************************ 00:17:51.548 16:16:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:51.807 * Looking for test storage... 00:17:51.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.807 16:16:34 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.807 16:16:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:53.705 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:53.705 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:53.705 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:53.706 Found net devices under 0000:84:00.0: cvl_0_0 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:53.706 Found net devices under 0000:84:00.1: cvl_0_1 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:53.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:17:53.706 00:17:53.706 --- 10.0.0.2 ping statistics --- 00:17:53.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.706 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:17:53.706 00:17:53.706 --- 10.0.0.1 ping statistics --- 00:17:53.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.706 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:53.706 only one NIC for nvmf test 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.706 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.706 rmmod nvme_tcp 00:17:53.706 rmmod nvme_fabrics 00:17:53.964 rmmod nvme_keyring 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.964 16:16:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.865 00:17:55.865 real 0m4.263s 00:17:55.865 user 0m0.747s 00:17:55.865 sys 0m1.501s 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.865 16:16:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:55.865 ************************************ 00:17:55.865 END TEST nvmf_target_multipath 00:17:55.865 ************************************ 00:17:55.865 16:16:38 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:55.865 16:16:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:55.865 16:16:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:55.865 16:16:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.865 ************************************ 00:17:55.865 START TEST nvmf_zcopy 00:17:55.865 ************************************ 00:17:55.865 16:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:56.124 * Looking for test storage... 
00:17:56.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.124 16:16:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.125 16:16:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:58.023 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:58.024 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.024 
16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:58.024 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:58.024 Found net devices under 0000:84:00.0: cvl_0_0 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:58.024 Found net devices under 0000:84:00.1: cvl_0_1 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:58.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:17:58.024 00:17:58.024 --- 10.0.0.2 ping statistics --- 00:17:58.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.024 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:58.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:17:58.024 00:17:58.024 --- 10.0.0.1 ping statistics --- 00:17:58.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.024 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:58.024 16:16:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=314640 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 314640 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 314640 ']' 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:58.283 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.283 [2024-07-15 16:16:41.066129] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:58.283 [2024-07-15 16:16:41.066213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.283 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.283 [2024-07-15 16:16:41.131696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.283 [2024-07-15 16:16:41.220675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.283 [2024-07-15 16:16:41.220744] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
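What nvmf_tcp_init traced out above deserves a summary: the dual-port E810 NIC is split so that one port (cvl_0_0, 10.0.0.2) is moved into a private network namespace and acts as the target, while the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator; the cross-namespace pings succeeding in both directions implies the two ports are wired back-to-back on this rig, so NVMe/TCP traffic crosses a real link rather than loopback. Condensed from the commands in the trace:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk                           # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Everything run against the target afterwards is prefixed with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array set at nvmf/common.sh@243), which is how nvmfappstart launched nvmf_tgt just above.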
00:17:58.283 [2024-07-15 16:16:41.220759] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.283 [2024-07-15 16:16:41.220770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.283 [2024-07-15 16:16:41.220779] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.283 [2024-07-15 16:16:41.220806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.541 [2024-07-15 16:16:41.367458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.541 [2024-07-15 16:16:41.383672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.541 malloc0 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.541 
16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:58.541 { 00:17:58.541 "params": { 00:17:58.541 "name": "Nvme$subsystem", 00:17:58.541 "trtype": "$TEST_TRANSPORT", 00:17:58.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.541 "adrfam": "ipv4", 00:17:58.541 "trsvcid": "$NVMF_PORT", 00:17:58.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.541 "hdgst": ${hdgst:-false}, 00:17:58.541 "ddgst": ${ddgst:-false} 00:17:58.541 }, 00:17:58.541 "method": "bdev_nvme_attach_controller" 00:17:58.541 } 00:17:58.541 EOF 00:17:58.541 )") 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:58.541 16:16:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:58.541 "params": { 00:17:58.541 "name": "Nvme1", 00:17:58.541 "trtype": "tcp", 00:17:58.541 "traddr": "10.0.0.2", 00:17:58.541 "adrfam": "ipv4", 00:17:58.541 "trsvcid": "4420", 00:17:58.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.541 "hdgst": false, 00:17:58.541 "ddgst": false 00:17:58.541 }, 00:17:58.541 "method": "bdev_nvme_attach_controller" 00:17:58.541 }' 00:17:58.541 [2024-07-15 16:16:41.465871] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:58.541 [2024-07-15 16:16:41.465948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314661 ] 00:17:58.541 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.799 [2024-07-15 16:16:41.535868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.799 [2024-07-15 16:16:41.626656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.056 Running I/O for 10 seconds... 
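Note that the bdevperf configuration never touches disk: gen_nvmf_target_json expands one heredoc per subsystem into a bdev_nvme_attach_controller entry, pretty-prints it through jq, and bdevperf reads the result from a process-substitution fd (/dev/fd/62 above). A stand-alone sketch of the same invocation, assuming the variables nvmftestinit exported (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT) are still set in the calling shell:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source "$SPDK/test/nvmf/common.sh"      # defines gen_nvmf_target_json
    "$SPDK/build/examples/bdevperf" \
        --json <(gen_nvmf_target_json) \    # same fd trick as --json /dev/fd/62 in the trace
        -t 10 -q 128 -w verify -o 8192      # 10 s verify workload, QD 128, 8 KiB I/Os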
00:18:09.042 00:18:09.042 Latency(us) 00:18:09.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.042 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:09.042 Verification LBA range: start 0x0 length 0x1000 00:18:09.042 Nvme1n1 : 10.01 5770.15 45.08 0.00 0.00 22123.66 867.75 33204.91 00:18:09.042 =================================================================================================================== 00:18:09.042 Total : 5770.15 45.08 0.00 0.00 22123.66 867.75 33204.91 00:18:09.300 16:16:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=315856 00:18:09.300 16:16:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:09.300 16:16:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:09.300 16:16:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:09.300 16:16:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:09.300 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:09.301 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:09.301 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:09.301 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:09.301 { 00:18:09.301 "params": { 00:18:09.301 "name": "Nvme$subsystem", 00:18:09.301 "trtype": "$TEST_TRANSPORT", 00:18:09.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.301 "adrfam": "ipv4", 00:18:09.301 "trsvcid": "$NVMF_PORT", 00:18:09.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.301 "hdgst": ${hdgst:-false}, 00:18:09.301 "ddgst": ${ddgst:-false} 00:18:09.301 }, 00:18:09.301 "method": "bdev_nvme_attach_controller" 00:18:09.301 } 00:18:09.301 EOF 00:18:09.301 )") 00:18:09.301 [2024-07-15 16:16:52.197729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.197811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:09.301 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:09.301 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:09.301 16:16:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:09.301 "params": { 00:18:09.301 "name": "Nvme1", 00:18:09.301 "trtype": "tcp", 00:18:09.301 "traddr": "10.0.0.2", 00:18:09.301 "adrfam": "ipv4", 00:18:09.301 "trsvcid": "4420", 00:18:09.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.301 "hdgst": false, 00:18:09.301 "ddgst": false 00:18:09.301 }, 00:18:09.301 "method": "bdev_nvme_attach_controller" 00:18:09.301 }' 00:18:09.301 [2024-07-15 16:16:52.205687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.205715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.213704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.213728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.221720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.221750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.229790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.229813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.236562] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:09.301 [2024-07-15 16:16:52.236621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315856 ] 00:18:09.301 [2024-07-15 16:16:52.237793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.237817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.245816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.245855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.253837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.253860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.261851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.261875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.301 [2024-07-15 16:16:52.269871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.269892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.301 [2024-07-15 16:16:52.277910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.301 [2024-07-15 16:16:52.277933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.559 [2024-07-15 16:16:52.285915] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.559 [2024-07-15 16:16:52.285937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.559 [2024-07-15 16:16:52.293935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.559 [2024-07-15 16:16:52.293957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.559 [2024-07-15 16:16:52.301619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.559 [2024-07-15 16:16:52.301957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.559 [2024-07-15 16:16:52.301978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.310028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.310068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.318041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.318073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.326041] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.326064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.334069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.334118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.342104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.342130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.350103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.350129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.358154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.358191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.366163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.366190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.374179] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.374204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.382202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.382228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.390225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.390250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.395665] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:09.560 [2024-07-15 16:16:52.398247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.398271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.406269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.406294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.414313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.414350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.422335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.422376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.430358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.430394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.438386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.438425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.446414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.446455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.454435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.454476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.462439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.462472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.470472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.470510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.478498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.478542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.486521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.486559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.494519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.494544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.502541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.502567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.510574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
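The wall of paired errors here is expected rather than a test failure: while bdevperf keeps I/O in flight, the zcopy test keeps calling nvmf_subsystem_add_ns with an NSID the subsystem already owns (malloc0 was attached as namespace 1 during setup), so each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext and reported again at the RPC layer via nvmf_rpc_ns_paused; the intent appears to be exercising the subsystem pause/resume path under load. A single attempt can be reproduced with the rpc.py that the rpc_cmd wrapper drives, using the same arguments as the successful add at target/zcopy.sh@30 earlier:

    # Fails with 'Requested NSID 1 already in use' while namespace 1 exists:
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1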
00:18:09.560 [2024-07-15 16:16:52.510605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.518595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.518624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.526615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.526642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.560 [2024-07-15 16:16:52.534638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.560 [2024-07-15 16:16:52.534666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.542661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.542688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.550685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.550714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.558704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.558731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.566729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.566763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.574798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.574821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.582794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.582816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.590818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.590841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.598832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.598853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.606850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.606871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.614870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.614891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.622879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.622901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.630901] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.818 [2024-07-15 16:16:52.630923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.818 [2024-07-15 16:16:52.638922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.638945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 [2024-07-15 16:16:52.646946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.646967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 [2024-07-15 16:16:52.654967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.654988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 [2024-07-15 16:16:52.662991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.663026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 [2024-07-15 16:16:52.671031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.671052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 [2024-07-15 16:16:52.679060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.679097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 [2024-07-15 16:16:52.687089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.687114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 [2024-07-15 16:16:52.695121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:09.819 [2024-07-15 16:16:52.695149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:09.819 Running I/O for 5 seconds... 
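Two quick consistency checks on the 10-second verify table above: throughput, 5770.15 IOPS x 8192 B = 47.27 MB/s = 45.08 MiB/s, matches the MiB/s column; and Little's law at queue depth 128 predicts an average latency of about 128 / 5770.15 s = 22183 us, in line with the reported 22123.66 us. The 5-second run starting here switches the workload to -w randrw -M 50 (a 50/50 random read/write mix at the same queue depth and I/O size) while the namespace-add attempts continue in the background.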
00:18:09.819 [2024-07-15 16:16:52.703140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:09.819 [2024-07-15 16:16:52.703166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every ~10-15 ms from 16:16:52.717 through 16:16:56.132 (log clock 00:18:09.819 to 00:18:13.187); several hundred repetitions elided ...]
00:18:13.187 [2024-07-15 16:16:56.144035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:13.187 [2024-07-15 16:16:56.144061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:13.187 [2024-07-15 16:16:56.155218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:13.187 [2024-07-15 16:16:56.155259]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.167192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.167225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.178875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.178902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.190227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.190258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.201494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.201525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.212864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.212891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.224511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.224542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.236169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.236200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.248304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.248335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.260102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.260134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.272051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.446 [2024-07-15 16:16:56.272077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.446 [2024-07-15 16:16:56.283404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.283434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.294824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.294850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.306160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.306190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.317884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.317910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.329475] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.329507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.340587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.340619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.351879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.351906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.363646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.363677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.374983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.375030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.388508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.388538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.399707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.399745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.410810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.410839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.447 [2024-07-15 16:16:56.422469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.447 [2024-07-15 16:16:56.422500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.433804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.433832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.446851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.446877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.456847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.456873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.469065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.469096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.480525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.480555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.491878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.491904] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.502955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.502981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.514164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.514194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.527347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.527377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.537919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.537946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.549489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.549520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.560862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.560888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.572056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.572097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.584272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.584304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.595927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.595960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.607040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.607066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.620734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.620785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.631400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.631432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.641970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.641996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.653189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.653220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.664272] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.664303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:13.705 [2024-07-15 16:16:56.675359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:13.705 [2024-07-15 16:16:56.675390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.686727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.686786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.697859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.697885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.711389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.711420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.721913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.721939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.732910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.732936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.744255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.744286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.757566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.757597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.768159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.768190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.779576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.779607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.793123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.793154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.803663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.803694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.814545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.814585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.826146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.826177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.837108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.837136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.847703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.847734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.860923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.860952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.871940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.030 [2024-07-15 16:16:56.871968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.030 [2024-07-15 16:16:56.883190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.031 [2024-07-15 16:16:56.883222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.031 [2024-07-15 16:16:56.894735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.031 [2024-07-15 16:16:56.894792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.031 [2024-07-15 16:16:56.906575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.031 [2024-07-15 16:16:56.906607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.031 [2024-07-15 16:16:56.918048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.031 [2024-07-15 16:16:56.918074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.031 [2024-07-15 16:16:56.931579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.031 [2024-07-15 16:16:56.931610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.031 [2024-07-15 16:16:56.942781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.031 [2024-07-15 16:16:56.942826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:56.954524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:56.954556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:56.965677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:56.965707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:56.977074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:56.977119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:56.988295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:56.988325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:56.999855] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:56.999882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.011187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.011219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.024812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.024838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.035358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.035389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.046720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.046760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.058288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.058319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.069590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.069621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.080971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.080997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.092863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.092891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.104354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.104385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.116316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.116347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.128059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.128100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.139644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.139675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.150961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.150988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.162462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.162494] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.174646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.174677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.186876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.186903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.198422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.198453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.209832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.209858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.221328] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.221359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.232788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.232814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.244411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.244443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.256339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.256371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.267272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.267303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.330 [2024-07-15 16:16:57.278859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.330 [2024-07-15 16:16:57.278901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.290292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.290323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.302129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.302160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.313621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.313652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.325357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.325388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.337610] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.337641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.349832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.349860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.361550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.361581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.373489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.373520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.385082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.385126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.396608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.396639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.407969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.407995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.419713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.419753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.431680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.431712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.443155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.443186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.455071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.455115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.466333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.466364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.479695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.479725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.490401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.490432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.502334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.502366] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.513927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.513953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.525419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.525451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.539040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.539066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.550002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.550042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.590 [2024-07-15 16:16:57.561856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.590 [2024-07-15 16:16:57.561883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.574091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.574122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.585888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.585915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.597490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.597521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.609166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.609198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.620813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.620840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.632519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.632550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.643445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.643477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.654757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.654799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.666492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:14.851 [2024-07-15 16:16:57.666523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.851 [2024-07-15 16:16:57.677715] 
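For context on the error storm above: this is the zcopy test's intended negative path. While a backgrounded I/O job runs against the subsystem, the script keeps calling nvmf_subsystem_add_ns for NSID 1, which is already allocated, so every attempt fails with the two messages shown. A minimal sketch that reproduces the same pair by hand — this is not part of zcopy.sh; it assumes a running SPDK target, scripts/rpc.py from the SPDK tree on PATH, the default RPC socket, and borrows this job's subsystem and bdev names:

# hypothetical reproduction of "Requested NSID 1 already in use"
rpc.py nvmf_create_transport -t tcp                                   # the TCP transport must exist first
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a            # -a: allow any host to connect
rpc.py bdev_malloc_create -b malloc0 64 512                           # 64 MiB bdev with 512-byte blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add of NSID 1 succeeds
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # second add fails with the pair above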
00:18:14.851
00:18:14.851 Latency(us)
00:18:14.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:14.851 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:14.851 Nvme1n1 : 5.01 11294.48 88.24 0.00 0.00 11317.27 4757.43 23787.14
00:18:14.851 ===================================================================================================================
00:18:14.851 Total : 11294.48 88.24 0.00 0.00 11317.27 4757.43 23787.14
00:18:14.851 [2024-07-15 16:16:57.727593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:14.851 [2024-07-15 16:16:57.727621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues at 8-12 ms intervals through 16:16:57.952224 as the loop drains; repetitions elided ...]
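The trace below is the wind-down of that loop: the backgrounded I/O process (pid 315856) has already exited, NSID 1 is detached and re-attached on top of a delay bdev, and the abort example is pointed at the job's 10.0.0.2:4420 TCP listener. Condensed into plain commands, a sketch using the same names and parameters as the trace — note bdev_delay_create takes its latencies in microseconds, so 1000000 is one second:

rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # detach NSID 1 from the subsystem
rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                      # 1 s average/p99 read and write latency
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1  # re-attach NSID 1 backed by the slow bdev
# with completions delayed ~1 s, queued commands stay pending long enough to abort:
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'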
00:18:15.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (315856) - No such process
00:18:15.112 16:16:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 315856
[... the xtrace_disable/set +x bookkeeping lines around each rpc_cmd below are elided ...]
00:18:15.112 16:16:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:15.112 16:16:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:15.112 delay0
00:18:15.112 16:16:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:15.112 16:16:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:18:15.112 EAL: No free 2048 kB hugepages reported on node 1
00:18:15.112 [2024-07-15 16:16:58.074855] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*:
Skipping unsupported current discovery service or discovery service referral
00:18:21.707 Initializing NVMe Controllers
00:18:21.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:21.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:21.707 Initialization complete. Launching workers.
00:18:21.707 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 116
00:18:21.707 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 403, failed to submit 33
00:18:21.707 success 243, unsuccess 160, failed 0
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:21.707 rmmod nvme_tcp
00:18:21.707 rmmod nvme_fabrics
00:18:21.707 rmmod nvme_keyring
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 314640 ']'
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 314640
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 314640 ']'
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 314640
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 314640
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 314640'
00:18:21.707 killing process with pid 314640
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 314640
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 314640
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
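The abort run above completed 320 I/Os (116 failed) and submitted 403 aborts, 243 of which succeeded — the behavior the delay bdev was installed to provoke. The surrounding trace is nvmftestfini tearing host and target down; reduced to plain commands it looks roughly like the sketch below, where the ip netns line is an assumption about what _remove_spdk_ns does (the helper's body is not echoed in this log):

sync                              # settle outstanding writes before unloading modules
modprobe -v -r nvme-tcp           # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
modprobe -v -r nvme-fabrics
kill 314640 && wait 314640        # stop the nvmf_tgt app (process name reactor_1)
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # clear addresses on the second test interface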
00:18:21.707 16:17:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:23.611 16:17:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:23.611
00:18:23.611 real 0m27.733s
00:18:23.611 user 0m39.864s
00:18:23.611 sys 0m9.355s
00:18:23.611 16:17:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:23.611 16:17:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:23.611 ************************************
00:18:23.611 END TEST nvmf_zcopy
00:18:23.611 ************************************
00:18:23.611 16:17:06 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:18:23.611 16:17:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:18:23.611 16:17:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:23.611 16:17:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:18:23.869 ************************************
00:18:23.869 START TEST nvmf_nmic
00:18:23.869 ************************************
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:18:23.869 * Looking for test storage...
00:18:23.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:23.869 16:17:06
nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:23.869 
16:17:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:23.869 16:17:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.767 
16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:25.767 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:25.767 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:25.767 Found net devices under 0000:84:00.0: cvl_0_0 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:25.767 Found net devices under 0000:84:00.1: cvl_0_1 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:25.767 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.768 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.024 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.024 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:26.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:26.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:18:26.025 00:18:26.025 --- 10.0.0.2 ping statistics --- 00:18:26.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.025 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:18:26.025 00:18:26.025 --- 10.0.0.1 ping statistics --- 00:18:26.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.025 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=319250 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 319250 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 319250 ']' 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:26.025 16:17:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:26.025 [2024-07-15 16:17:08.921299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:26.025 [2024-07-15 16:17:08.921379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:26.025 EAL: No free 2048 kB hugepages reported on node 1
00:18:26.025 [2024-07-15 16:17:08.992416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:26.283 [2024-07-15 16:17:09.085998] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:26.283 [2024-07-15 16:17:09.086059] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:26.283 [2024-07-15 16:17:09.086085] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:26.283 [2024-07-15 16:17:09.086100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:26.283 [2024-07-15 16:17:09.086112] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:26.283 [2024-07-15 16:17:09.086205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:26.283 [2024-07-15 16:17:09.086263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:26.283 [2024-07-15 16:17:09.086379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:18:26.283 [2024-07-15 16:17:09.086382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.283 [2024-07-15 16:17:09.241601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.283 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.542 Malloc0
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.542 [2024-07-15 16:17:09.295273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:18:26.542 test case1: single bdev can't be used in multiple subsystems
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:18:26.542 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.543 [2024-07-15 16:17:09.319154] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:18:26.543 [2024-07-15 16:17:09.319186] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:18:26.543 [2024-07-15 16:17:09.319201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:26.543 request:
00:18:26.543 {
00:18:26.543 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:18:26.543 "namespace": {
00:18:26.543 "bdev_name": "Malloc0",
00:18:26.543 "no_auto_visible": false
00:18:26.543 },
00:18:26.543 "method": "nvmf_subsystem_add_ns",
00:18:26.543 "req_id": 1
00:18:26.543 }
00:18:26.543 Got JSON-RPC error response
00:18:26.543 response:
00:18:26.543 {
00:18:26.543 "code": -32602,
00:18:26.543 "message": "Invalid parameters"
00:18:26.543 }
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:18:26.543 Adding namespace failed - expected result.
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:18:26.543 test case2: host connect to nvmf target in multiple paths
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:26.543 [2024-07-15 16:17:09.327253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:26.543 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:27.105 16:17:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:18:27.671 16:17:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:18:27.671 16:17:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0
00:18:27.671 16:17:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:18:27.671 16:17:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:18:27.671 16:17:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2
00:18:30.201 16:17:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:18:30.201 16:17:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:18:30.201 16:17:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:18:30.201 16:17:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:18:30.201 16:17:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:18:30.201 16:17:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0
00:18:30.202 16:17:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:18:30.202 [global]
00:18:30.202 thread=1
00:18:30.202 invalidate=1
00:18:30.202 rw=write
00:18:30.202 time_based=1
00:18:30.202 runtime=1
00:18:30.202 ioengine=libaio
00:18:30.202 direct=1
00:18:30.202 bs=4096
00:18:30.202 iodepth=1
00:18:30.202 norandommap=0
00:18:30.202 numjobs=1
00:18:30.202
00:18:30.202 verify_dump=1
00:18:30.202 verify_backlog=512
00:18:30.202 verify_state_save=0
00:18:30.202 do_verify=1
00:18:30.202 verify=crc32c-intel
00:18:30.202 [job0]
00:18:30.202 filename=/dev/nvme0n1
00:18:30.202 Could not set queue depth (nvme0n1)
00:18:30.202 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:30.202 fio-3.35
00:18:30.202 Starting 1 thread
00:18:31.138
00:18:31.138 job0: (groupid=0, jobs=1): err= 0: pid=319776: Mon Jul 15 16:17:14 2024
00:18:31.138 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec)
00:18:31.138 slat (nsec): min=9552, max=44248, avg=19037.96, stdev=8971.92
00:18:31.138 clat (usec): min=40807, max=41495, avg=40990.10, stdev=122.29
00:18:31.138 lat (usec): min=40839, max=41504, avg=41009.14, stdev=118.21
00:18:31.138 clat percentiles (usec):
00:18:31.138 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:18:31.138 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:18:31.138 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:18:31.138 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:18:31.138 | 99.99th=[41681]
00:18:31.138 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets
00:18:31.138 slat (nsec): min=7218, max=37193, avg=9661.60, stdev=2639.45
00:18:31.138 clat (usec): min=144, max=271, avg=170.65, stdev=16.79
00:18:31.138 lat (usec): min=153, max=300, avg=180.31, stdev=17.41
00:18:31.138 clat percentiles (usec):
00:18:31.138 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155],
00:18:31.138 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174],
00:18:31.138 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 198],
00:18:31.138 | 99.00th=[ 215], 99.50th=[ 239], 99.90th=[ 273], 99.95th=[ 273],
00:18:31.138 | 99.99th=[ 273]
00:18:31.138 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:18:31.138 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:18:31.138 lat (usec) : 250=95.33%, 500=0.37%
00:18:31.138 lat (msec) : 10=0.19%, 50=4.30%
00:18:31.138 cpu : usr=0.77%, sys=0.00%, ctx=535, majf=0, minf=2
00:18:31.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:31.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:31.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:31.138 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:31.138 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:31.138
00:18:31.138 Run status group 0 (all jobs):
00:18:31.138 READ: bw=88.7KiB/s (90.8kB/s), 88.7KiB/s-88.7KiB/s (90.8kB/s-90.8kB/s), io=92.0KiB (94.2kB), run=1037-1037msec
00:18:31.138 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec
00:18:31.138
00:18:31.138 Disk stats (read/write):
00:18:31.138 nvme0n1: ios=69/512, merge=0/0, ticks=808/89, in_queue=897, util=91.58%
00:18:31.138 16:17:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:31.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:31.396 rmmod nvme_tcp
00:18:31.396 rmmod nvme_fabrics
00:18:31.396 rmmod nvme_keyring
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 319250 ']'
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 319250
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 319250 ']'
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 319250
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 319250
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 319250'
00:18:31.396 killing process with pid 319250
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 319250
00:18:31.396 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 319250
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:31.655 16:17:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:34.190 16:17:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:34.190
00:18:34.190 real 0m9.959s
00:18:34.190 user 0m22.608s
00:18:34.190 sys 0m2.335s
00:18:34.190 16:17:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:34.190 16:17:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:18:34.190 ************************************
00:18:34.190 END TEST nvmf_nmic
00:18:34.190 ************************************
00:18:34.190 16:17:16 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:18:34.190 16:17:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:18:34.190 16:17:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:34.190 16:17:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:18:34.190 ************************************
00:18:34.190 START TEST nvmf_fio_target
00:18:34.190 ************************************
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:18:34.190 * Looking for test storage...
00:18:34.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:34.190 16:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable
00:18:34.191 16:17:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=()
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=()
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=()
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=()
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=()
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:18:35.568 Found 0000:84:00.0 (0x8086 - 0x159b)
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:18:35.568 Found 0000:84:00.1 (0x8086 - 0x159b)
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:18:35.568 Found net devices under 0000:84:00.0: cvl_0_0
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:18:35.568 Found net devices under 0000:84:00.1: cvl_0_1
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:35.568 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:35.569 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:35.569 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:35.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:35.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms
00:18:35.829
00:18:35.829 --- 10.0.0.2 ping statistics ---
00:18:35.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:35.829 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:35.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:35.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms
00:18:35.829
00:18:35.829 --- 10.0.0.1 ping statistics ---
00:18:35.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:35.829 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=321972
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 321972
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 321972 ']'
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:35.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:35.829 16:17:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:18:35.830 [2024-07-15 16:17:18.758477] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:18:35.830 [2024-07-15 16:17:18.758561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:35.830 EAL: No free 2048 kB hugepages reported on node 1
00:18:36.089 [2024-07-15 16:17:18.831883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:36.089 [2024-07-15 16:17:18.929809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:36.089 [2024-07-15 16:17:18.929878] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:36.089 [2024-07-15 16:17:18.929894] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:36.089 [2024-07-15 16:17:18.929908] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:36.089 [2024-07-15 16:17:18.929919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:36.089 [2024-07-15 16:17:18.932762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:36.089 [2024-07-15 16:17:18.932817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:36.089 [2024-07-15 16:17:18.932934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:18:36.089 [2024-07-15 16:17:18.932937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:36.089 16:17:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:36.089 16:17:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0
00:18:36.089 16:17:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:18:36.089 16:17:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:36.089 16:17:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.347 16:17:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:36.347 16:17:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:18:36.604 [2024-07-15 16:17:19.348239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:36.604 16:17:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:36.862 16:17:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:18:36.862 16:17:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:37.120 16:17:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:18:37.120 16:17:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:37.378 16:17:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:18:37.378 16:17:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:37.636 16:17:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:18:37.636 16:17:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:18:37.894 16:17:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:38.151 16:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:18:38.152 16:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:38.409 16:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:18:38.409 16:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:18:38.667 16:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:18:38.667 16:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:18:38.925 16:17:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:18:39.182 16:17:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:18:39.182 16:17:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:39.440 16:17:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:18:39.440 16:17:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:39.698 16:17:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:39.955 [2024-07-15 16:17:22.746779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:39.955 16:17:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:18:40.213 16:17:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:18:40.470 16:17:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:41.039 16:17:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:18:41.039 16:17:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0
00:18:41.039 16:17:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:18:41.039 16:17:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]]
00:18:41.039 16:17:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4
00:18:41.039 16:17:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2
00:18:42.991 16:17:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:18:42.991 16:17:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:18:42.991 16:17:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:18:42.991 16:17:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4
00:18:42.991 16:17:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:18:42.991 16:17:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0
00:18:42.991 16:17:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:18:43.249 [global]
00:18:43.249 thread=1
00:18:43.249 invalidate=1
00:18:43.249 rw=write
00:18:43.249 time_based=1
00:18:43.249 runtime=1
00:18:43.249 ioengine=libaio
00:18:43.249 direct=1
00:18:43.249 bs=4096
00:18:43.249 iodepth=1
00:18:43.249 norandommap=0
00:18:43.249 numjobs=1
00:18:43.249
00:18:43.249 verify_dump=1
00:18:43.249 verify_backlog=512
00:18:43.249 verify_state_save=0
00:18:43.249 do_verify=1
00:18:43.249 verify=crc32c-intel
00:18:43.249 [job0]
00:18:43.249 filename=/dev/nvme0n1
00:18:43.249 [job1]
00:18:43.249 filename=/dev/nvme0n2
00:18:43.249 [job2]
00:18:43.249 filename=/dev/nvme0n3
00:18:43.249 [job3]
00:18:43.249 filename=/dev/nvme0n4
00:18:43.249 Could not set queue depth (nvme0n1)
00:18:43.249 Could not set queue depth (nvme0n2)
00:18:43.249 Could not set queue depth (nvme0n3)
00:18:43.249 Could not set queue depth (nvme0n4)
00:18:43.249 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:43.249 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:43.249 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:43.249 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:18:43.249 fio-3.35
00:18:43.249 Starting 4 threads
00:18:44.624
00:18:44.624 job0: (groupid=0, jobs=1): err= 0: pid=322935: Mon Jul 15 16:17:27 2024
00:18:44.624 read: IOPS=27, BW=112KiB/s (115kB/s)(112KiB/1001msec)
00:18:44.624 slat (nsec): min=7004, max=42092, avg=14098.36, stdev=6391.45
00:18:44.624 clat (usec): min=295, max=41330, avg=31058.59, stdev=17526.19
00:18:44.624 lat (usec): min=307, max=41337, avg=31072.68, stdev=17527.81
00:18:44.624 clat percentiles (usec):
00:18:44.624 | 1.00th=[ 297], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 400],
00:18:44.624 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:18:44.624 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:18:44.624 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:18:44.624 | 99.99th=[41157]
00:18:44.624 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:18:44.624 slat (nsec): min=7444, max=53570, avg=13524.84, stdev=7613.53
00:18:44.624 clat (usec): min=172, max=408, avg=238.93, stdev=39.80
00:18:44.624 lat (usec): min=182, max=427, avg=252.46, stdev=42.61
00:18:44.624 clat percentiles (usec):
00:18:44.624 | 1.00th=[ 180], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 210],
00:18:44.624 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233],
00:18:44.624 | 70.00th=[ 245], 80.00th=[ 262], 90.00th=[ 293], 95.00th=[ 326],
00:18:44.624 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 408], 99.95th=[ 408],
00:18:44.624 | 99.99th=[ 408]
00:18:44.624 bw ( KiB/s): min= 4096, max= 4096, per=16.37%, avg=4096.00, stdev= 0.00, samples=1
00:18:44.624 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:18:44.624 lat (usec) : 250=70.74%, 500=25.19%
00:18:44.624 lat (msec) : 10=0.19%, 50=3.89%
00:18:44.624 cpu : usr=0.00%, sys=1.10%, ctx=540, majf=0, minf=2
00:18:44.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:44.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:44.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:44.624 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:44.624 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:44.624 job1: (groupid=0, jobs=1): err= 0: pid=322936: Mon Jul 15 16:17:27 2024
00:18:44.624 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:18:44.624 slat (nsec): min=6153, max=48710, avg=10426.64, stdev=5517.41
00:18:44.624 clat (usec): min=212, max=723, avg=328.06, stdev=71.50
00:18:44.624 lat (usec): min=219, max=757, avg=338.49, stdev=74.53
00:18:44.624 clat percentiles (usec):
00:18:44.624 | 1.00th=[ 225], 5.00th=[ 243], 10.00th=[ 273], 20.00th=[ 289],
00:18:44.624 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318],
00:18:44.624 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 408], 95.00th=[ 490],
00:18:44.624 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[ 717], 99.95th=[ 725],
00:18:44.624 | 99.99th=[ 725]
00:18:44.624 write: IOPS=1898, BW=7592KiB/s (7775kB/s)(7600KiB/1001msec); 0 zone resets
00:18:44.624 slat (usec): min=8, max=14454, avg=20.99, stdev=331.37
00:18:44.624 clat (usec): min=145, max=585, avg=225.76, stdev=52.88
00:18:44.624 lat (usec): min=155, max=14818, avg=246.75, stdev=339.18
00:18:44.624 clat percentiles (usec):
00:18:44.624 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 192],
00:18:44.624 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219],
00:18:44.624 | 70.00th=[ 229], 80.00th=[ 251], 90.00th=[ 297], 95.00th=[ 338],
00:18:44.624 | 99.00th=[ 412], 99.50th=[ 449], 99.90th=[ 553], 99.95th=[ 586],
00:18:44.624 | 99.99th=[ 586]
00:18:44.624 bw ( KiB/s): min= 8192, max= 8192, per=32.73%, avg=8192.00, stdev= 0.00, samples=1
00:18:44.624 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:18:44.624 lat (usec) : 250=47.12%, 500=50.70%, 750=2.18%
00:18:44.624 cpu : usr=1.90%, sys=6.20%, ctx=3439, majf=0, minf=1
00:18:44.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:44.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:44.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:44.624 issued rwts: total=1536,1900,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:44.624 latency : target=0, window=0, percentile=100.00%, depth=1
00:18:44.624 job2: (groupid=0, jobs=1): err= 0: pid=322937: Mon Jul 15 16:17:27 2024
00:18:44.624 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:18:44.624 slat (nsec): min=5063, max=77746,
avg=14934.11, stdev=8114.65 00:18:44.624 clat (usec): min=219, max=864, avg=331.53, stdev=80.02 00:18:44.624 lat (usec): min=227, max=879, avg=346.46, stdev=83.70 00:18:44.624 clat percentiles (usec): 00:18:44.624 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 273], 00:18:44.624 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 326], 00:18:44.624 | 70.00th=[ 343], 80.00th=[ 388], 90.00th=[ 453], 95.00th=[ 490], 00:18:44.624 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 791], 99.95th=[ 865], 00:18:44.624 | 99.99th=[ 865] 00:18:44.624 write: IOPS=2037, BW=8152KiB/s (8347kB/s)(8160KiB/1001msec); 0 zone resets 00:18:44.624 slat (nsec): min=7363, max=42692, avg=11296.55, stdev=5300.03 00:18:44.624 clat (usec): min=153, max=861, avg=211.86, stdev=56.45 00:18:44.624 lat (usec): min=161, max=880, avg=223.16, stdev=58.86 00:18:44.624 clat percentiles (usec): 00:18:44.624 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:18:44.624 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 200], 60.00th=[ 212], 00:18:44.624 | 70.00th=[ 221], 80.00th=[ 237], 90.00th=[ 269], 95.00th=[ 310], 00:18:44.624 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 537], 99.95th=[ 766], 00:18:44.624 | 99.99th=[ 865] 00:18:44.624 bw ( KiB/s): min= 8192, max= 8192, per=32.73%, avg=8192.00, stdev= 0.00, samples=1 00:18:44.624 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:44.624 lat (usec) : 250=53.66%, 500=44.63%, 750=1.59%, 1000=0.11% 00:18:44.624 cpu : usr=2.60%, sys=4.70%, ctx=3576, majf=0, minf=1 00:18:44.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.624 issued rwts: total=1536,2040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.624 job3: (groupid=0, jobs=1): err= 0: pid=322938: Mon Jul 15 16:17:27 2024 00:18:44.624 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:44.624 slat (nsec): min=6858, max=56622, avg=10198.12, stdev=4956.95 00:18:44.624 clat (usec): min=233, max=708, avg=324.03, stdev=46.51 00:18:44.624 lat (usec): min=242, max=723, avg=334.23, stdev=48.20 00:18:44.624 clat percentiles (usec): 00:18:44.624 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:18:44.624 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 322], 00:18:44.624 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 416], 00:18:44.624 | 99.00th=[ 490], 99.50th=[ 529], 99.90th=[ 627], 99.95th=[ 709], 00:18:44.624 | 99.99th=[ 709] 00:18:44.624 write: IOPS=1809, BW=7237KiB/s (7410kB/s)(7244KiB/1001msec); 0 zone resets 00:18:44.624 slat (nsec): min=8881, max=64662, avg=15083.70, stdev=7517.12 00:18:44.624 clat (usec): min=163, max=1016, avg=247.26, stdev=56.30 00:18:44.624 lat (usec): min=174, max=1029, avg=262.34, stdev=58.66 00:18:44.624 clat percentiles (usec): 00:18:44.624 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 206], 00:18:44.624 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 241], 00:18:44.624 | 70.00th=[ 260], 80.00th=[ 289], 90.00th=[ 330], 95.00th=[ 359], 00:18:44.624 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 709], 99.95th=[ 1020], 00:18:44.624 | 99.99th=[ 1020] 00:18:44.624 bw ( KiB/s): min= 8192, max= 8192, per=32.73%, avg=8192.00, stdev= 0.00, samples=1 00:18:44.624 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 
00:18:44.624 lat (usec) : 250=35.61%, 500=63.97%, 750=0.39% 00:18:44.624 lat (msec) : 2=0.03% 00:18:44.624 cpu : usr=2.80%, sys=6.10%, ctx=3348, majf=0, minf=1 00:18:44.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.624 issued rwts: total=1536,1811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.624 00:18:44.624 Run status group 0 (all jobs): 00:18:44.624 READ: bw=18.1MiB/s (19.0MB/s), 112KiB/s-6138KiB/s (115kB/s-6285kB/s), io=18.1MiB (19.0MB), run=1001-1001msec 00:18:44.624 WRITE: bw=24.4MiB/s (25.6MB/s), 2046KiB/s-8152KiB/s (2095kB/s-8347kB/s), io=24.5MiB (25.7MB), run=1001-1001msec 00:18:44.624 00:18:44.624 Disk stats (read/write): 00:18:44.624 nvme0n1: ios=74/512, merge=0/0, ticks=738/124, in_queue=862, util=86.27% 00:18:44.624 nvme0n2: ios=1404/1536, merge=0/0, ticks=835/329, in_queue=1164, util=97.55% 00:18:44.624 nvme0n3: ios=1468/1536, merge=0/0, ticks=468/302, in_queue=770, util=88.87% 00:18:44.624 nvme0n4: ios=1315/1536, merge=0/0, ticks=1358/374, in_queue=1732, util=97.78% 00:18:44.624 16:17:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:44.624 [global] 00:18:44.624 thread=1 00:18:44.624 invalidate=1 00:18:44.624 rw=randwrite 00:18:44.624 time_based=1 00:18:44.624 runtime=1 00:18:44.624 ioengine=libaio 00:18:44.624 direct=1 00:18:44.624 bs=4096 00:18:44.624 iodepth=1 00:18:44.624 norandommap=0 00:18:44.624 numjobs=1 00:18:44.624 00:18:44.624 verify_dump=1 00:18:44.624 verify_backlog=512 00:18:44.624 verify_state_save=0 00:18:44.624 do_verify=1 00:18:44.624 verify=crc32c-intel 00:18:44.624 [job0] 00:18:44.624 filename=/dev/nvme0n1 00:18:44.624 [job1] 00:18:44.624 filename=/dev/nvme0n2 00:18:44.624 [job2] 00:18:44.624 filename=/dev/nvme0n3 00:18:44.624 [job3] 00:18:44.624 filename=/dev/nvme0n4 00:18:44.624 Could not set queue depth (nvme0n1) 00:18:44.624 Could not set queue depth (nvme0n2) 00:18:44.624 Could not set queue depth (nvme0n3) 00:18:44.624 Could not set queue depth (nvme0n4) 00:18:44.882 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.882 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.882 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.882 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.882 fio-3.35 00:18:44.882 Starting 4 threads 00:18:46.256 00:18:46.256 job0: (groupid=0, jobs=1): err= 0: pid=323228: Mon Jul 15 16:17:28 2024 00:18:46.256 read: IOPS=74, BW=298KiB/s (305kB/s)(304KiB/1020msec) 00:18:46.256 slat (nsec): min=5283, max=53641, avg=11011.09, stdev=6984.21 00:18:46.256 clat (usec): min=217, max=41917, avg=11929.13, stdev=18333.19 00:18:46.256 lat (usec): min=225, max=41931, avg=11940.14, stdev=18335.35 00:18:46.256 clat percentiles (usec): 00:18:46.256 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 281], 00:18:46.256 | 30.00th=[ 310], 40.00th=[ 363], 50.00th=[ 424], 60.00th=[ 469], 00:18:46.256 | 70.00th=[ 523], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:46.256 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:18:46.256 | 99.99th=[41681] 00:18:46.256 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:18:46.256 slat (nsec): min=7103, max=32233, avg=8758.89, stdev=2073.58 00:18:46.256 clat (usec): min=164, max=268, avg=206.25, stdev=19.30 00:18:46.256 lat (usec): min=172, max=286, avg=215.01, stdev=19.64 00:18:46.256 clat percentiles (usec): 00:18:46.256 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:18:46.256 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:18:46.256 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 239], 00:18:46.256 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 269], 99.95th=[ 269], 00:18:46.256 | 99.99th=[ 269] 00:18:46.256 bw ( KiB/s): min= 4096, max= 4096, per=29.66%, avg=4096.00, stdev= 0.00, samples=1 00:18:46.256 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:46.256 lat (usec) : 250=87.76%, 500=7.99%, 750=0.51% 00:18:46.256 lat (msec) : 50=3.74% 00:18:46.256 cpu : usr=0.39%, sys=0.39%, ctx=591, majf=0, minf=1 00:18:46.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.256 issued rwts: total=76,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.256 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.256 job1: (groupid=0, jobs=1): err= 0: pid=323247: Mon Jul 15 16:17:28 2024 00:18:46.256 read: IOPS=1482, BW=5931KiB/s (6073kB/s)(6156KiB/1038msec) 00:18:46.256 slat (nsec): min=4969, max=41914, avg=9388.16, stdev=4257.10 00:18:46.256 clat (usec): min=221, max=40985, avg=397.20, stdev=1793.52 00:18:46.256 lat (usec): min=230, max=41003, avg=406.59, stdev=1793.82 00:18:46.256 clat percentiles (usec): 00:18:46.256 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:18:46.256 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 302], 00:18:46.256 | 70.00th=[ 334], 80.00th=[ 388], 90.00th=[ 449], 95.00th=[ 490], 00:18:46.256 | 99.00th=[ 594], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:18:46.256 | 99.99th=[41157] 00:18:46.256 write: IOPS=1973, BW=7892KiB/s (8082kB/s)(8192KiB/1038msec); 0 zone resets 00:18:46.256 slat (nsec): min=6657, max=41715, avg=8863.54, stdev=3387.17 00:18:46.256 clat (usec): min=126, max=530, avg=187.85, stdev=54.23 00:18:46.256 lat (usec): min=133, max=538, avg=196.72, stdev=55.60 00:18:46.256 clat percentiles (usec): 00:18:46.256 | 1.00th=[ 131], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 151], 00:18:46.256 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 180], 00:18:46.256 | 70.00th=[ 190], 80.00th=[ 215], 90.00th=[ 269], 95.00th=[ 310], 00:18:46.256 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 482], 99.95th=[ 482], 00:18:46.256 | 99.99th=[ 529] 00:18:46.256 bw ( KiB/s): min= 8192, max= 8192, per=59.31%, avg=8192.00, stdev= 0.00, samples=2 00:18:46.256 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:18:46.256 lat (usec) : 250=56.82%, 500=41.34%, 750=1.73%, 1000=0.03% 00:18:46.256 lat (msec) : 50=0.08% 00:18:46.256 cpu : usr=1.45%, sys=3.38%, ctx=3589, majf=0, minf=1 00:18:46.256 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:46.256 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.256 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.257 job2: (groupid=0, jobs=1): err= 0: pid=323280: Mon Jul 15 16:17:28 2024 00:18:46.257 read: IOPS=201, BW=805KiB/s (824kB/s)(820KiB/1019msec) 00:18:46.257 slat (nsec): min=6289, max=22493, avg=8549.21, stdev=2957.88 00:18:46.257 clat (usec): min=223, max=41308, avg=4276.24, stdev=12096.85 00:18:46.257 lat (usec): min=231, max=41316, avg=4284.79, stdev=12098.20 00:18:46.257 clat percentiles (usec): 00:18:46.257 | 1.00th=[ 231], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 281], 00:18:46.257 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 322], 00:18:46.257 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 562], 95.00th=[41157], 00:18:46.257 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:46.257 | 99.99th=[41157] 00:18:46.257 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:18:46.257 slat (nsec): min=7914, max=40340, avg=10777.31, stdev=3863.24 00:18:46.257 clat (usec): min=159, max=523, avg=259.82, stdev=64.82 00:18:46.257 lat (usec): min=169, max=539, avg=270.60, stdev=66.33 00:18:46.257 clat percentiles (usec): 00:18:46.257 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 202], 00:18:46.257 | 30.00th=[ 227], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 260], 00:18:46.257 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 371], 95.00th=[ 404], 00:18:46.257 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[ 523], 99.95th=[ 523], 00:18:46.257 | 99.99th=[ 523] 00:18:46.257 bw ( KiB/s): min= 4096, max= 4096, per=29.66%, avg=4096.00, stdev= 0.00, samples=1 00:18:46.257 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:46.257 lat (usec) : 250=35.29%, 500=61.65%, 750=0.28% 00:18:46.257 lat (msec) : 50=2.79% 00:18:46.257 cpu : usr=0.49%, sys=0.88%, ctx=718, majf=0, minf=1 00:18:46.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.257 issued rwts: total=205,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.257 job3: (groupid=0, jobs=1): err= 0: pid=323281: Mon Jul 15 16:17:28 2024 00:18:46.257 read: IOPS=142, BW=571KiB/s (585kB/s)(576KiB/1008msec) 00:18:46.257 slat (nsec): min=7137, max=22930, avg=9328.39, stdev=2272.93 00:18:46.257 clat (usec): min=219, max=41963, avg=6204.54, stdev=14431.54 00:18:46.257 lat (usec): min=228, max=41976, avg=6213.87, stdev=14433.01 00:18:46.257 clat percentiles (usec): 00:18:46.257 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:18:46.257 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:18:46.257 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[41157], 95.00th=[41157], 00:18:46.257 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:46.257 | 99.99th=[42206] 00:18:46.257 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:18:46.257 slat (nsec): min=9782, max=46449, avg=14100.80, stdev=6968.25 00:18:46.257 clat (usec): min=152, max=767, avg=202.44, stdev=37.50 00:18:46.257 lat (usec): min=170, max=779, avg=216.54, stdev=38.02 00:18:46.257 clat percentiles (usec): 00:18:46.257 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:18:46.257 | 30.00th=[ 184], 40.00th=[ 
190], 50.00th=[ 194], 60.00th=[ 200], 00:18:46.257 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 241], 95.00th=[ 251], 00:18:46.257 | 99.00th=[ 297], 99.50th=[ 347], 99.90th=[ 766], 99.95th=[ 766], 00:18:46.257 | 99.99th=[ 766] 00:18:46.257 bw ( KiB/s): min= 4096, max= 4096, per=29.66%, avg=4096.00, stdev= 0.00, samples=1 00:18:46.257 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:46.257 lat (usec) : 250=80.03%, 500=16.62%, 1000=0.15% 00:18:46.257 lat (msec) : 50=3.20% 00:18:46.257 cpu : usr=0.40%, sys=0.79%, ctx=658, majf=0, minf=2 00:18:46.257 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:46.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.257 issued rwts: total=144,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.257 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:46.257 00:18:46.257 Run status group 0 (all jobs): 00:18:46.257 READ: bw=7568KiB/s (7750kB/s), 298KiB/s-5931KiB/s (305kB/s-6073kB/s), io=7856KiB (8045kB), run=1008-1038msec 00:18:46.257 WRITE: bw=13.5MiB/s (14.1MB/s), 2008KiB/s-7892KiB/s (2056kB/s-8082kB/s), io=14.0MiB (14.7MB), run=1008-1038msec 00:18:46.257 00:18:46.257 Disk stats (read/write): 00:18:46.257 nvme0n1: ios=95/512, merge=0/0, ticks=1687/104, in_queue=1791, util=97.09% 00:18:46.257 nvme0n2: ios=1576/1640, merge=0/0, ticks=1194/314, in_queue=1508, util=97.05% 00:18:46.257 nvme0n3: ios=252/512, merge=0/0, ticks=1366/123, in_queue=1489, util=97.49% 00:18:46.257 nvme0n4: ios=163/512, merge=0/0, ticks=1673/102, in_queue=1775, util=97.47% 00:18:46.257 16:17:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:46.257 [global] 00:18:46.257 thread=1 00:18:46.257 invalidate=1 00:18:46.257 rw=write 00:18:46.257 time_based=1 00:18:46.257 runtime=1 00:18:46.257 ioengine=libaio 00:18:46.257 direct=1 00:18:46.257 bs=4096 00:18:46.257 iodepth=128 00:18:46.257 norandommap=0 00:18:46.257 numjobs=1 00:18:46.257 00:18:46.257 verify_dump=1 00:18:46.257 verify_backlog=512 00:18:46.257 verify_state_save=0 00:18:46.257 do_verify=1 00:18:46.257 verify=crc32c-intel 00:18:46.257 [job0] 00:18:46.257 filename=/dev/nvme0n1 00:18:46.257 [job1] 00:18:46.257 filename=/dev/nvme0n2 00:18:46.257 [job2] 00:18:46.257 filename=/dev/nvme0n3 00:18:46.257 [job3] 00:18:46.257 filename=/dev/nvme0n4 00:18:46.257 Could not set queue depth (nvme0n1) 00:18:46.257 Could not set queue depth (nvme0n2) 00:18:46.257 Could not set queue depth (nvme0n3) 00:18:46.257 Could not set queue depth (nvme0n4) 00:18:46.257 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.257 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.257 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.257 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.257 fio-3.35 00:18:46.257 Starting 4 threads 00:18:47.631 00:18:47.631 job0: (groupid=0, jobs=1): err= 0: pid=323505: Mon Jul 15 16:17:30 2024 00:18:47.631 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:18:47.631 slat (usec): min=3, max=9408, avg=116.21, stdev=656.62 00:18:47.631 clat (usec): min=6805, max=49974, avg=13553.85, 
stdev=4658.46 00:18:47.631 lat (usec): min=6813, max=50534, avg=13670.06, stdev=4736.72 00:18:47.631 clat percentiles (usec): 00:18:47.631 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10814], 00:18:47.631 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:18:47.631 | 70.00th=[13435], 80.00th=[14484], 90.00th=[17433], 95.00th=[21103], 00:18:47.631 | 99.00th=[34866], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:18:47.631 | 99.99th=[50070] 00:18:47.631 write: IOPS=3884, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1004msec); 0 zone resets 00:18:47.631 slat (usec): min=5, max=17048, avg=141.11, stdev=787.24 00:18:47.631 clat (usec): min=564, max=107408, avg=20072.99, stdev=16973.42 00:18:47.631 lat (msec): min=5, max=107, avg=20.21, stdev=17.07 00:18:47.631 clat percentiles (msec): 00:18:47.631 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:18:47.631 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 18], 00:18:47.631 | 70.00th=[ 22], 80.00th=[ 25], 90.00th=[ 32], 95.00th=[ 54], 00:18:47.631 | 99.00th=[ 99], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:18:47.631 | 99.99th=[ 108] 00:18:47.631 bw ( KiB/s): min=13523, max=16680, per=21.51%, avg=15101.50, stdev=2232.34, samples=2 00:18:47.631 iops : min= 3380, max= 4170, avg=3775.00, stdev=558.61, samples=2 00:18:47.631 lat (usec) : 750=0.01% 00:18:47.631 lat (msec) : 10=8.39%, 20=70.82%, 50=17.89%, 100=2.39%, 250=0.49% 00:18:47.631 cpu : usr=4.39%, sys=6.68%, ctx=447, majf=0, minf=1 00:18:47.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.631 issued rwts: total=3584,3900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.631 job1: (groupid=0, jobs=1): err= 0: pid=323506: Mon Jul 15 16:17:30 2024 00:18:47.631 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:18:47.631 slat (usec): min=2, max=11279, avg=91.61, stdev=625.66 00:18:47.631 clat (usec): min=996, max=44900, avg=13076.62, stdev=5221.23 00:18:47.631 lat (usec): min=1007, max=44920, avg=13168.23, stdev=5263.21 00:18:47.631 clat percentiles (usec): 00:18:47.631 | 1.00th=[ 1287], 5.00th=[ 6456], 10.00th=[ 8979], 20.00th=[10683], 00:18:47.631 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:18:47.631 | 70.00th=[13566], 80.00th=[15008], 90.00th=[20317], 95.00th=[23987], 00:18:47.631 | 99.00th=[29754], 99.50th=[36963], 99.90th=[44827], 99.95th=[44827], 00:18:47.631 | 99.99th=[44827] 00:18:47.631 write: IOPS=4643, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1002msec); 0 zone resets 00:18:47.631 slat (usec): min=3, max=9393, avg=96.77, stdev=600.60 00:18:47.631 clat (usec): min=334, max=46688, avg=14347.02, stdev=9280.12 00:18:47.631 lat (usec): min=360, max=46694, avg=14443.79, stdev=9339.63 00:18:47.631 clat percentiles (usec): 00:18:47.631 | 1.00th=[ 1532], 5.00th=[ 3326], 10.00th=[ 5604], 20.00th=[ 7898], 00:18:47.631 | 30.00th=[ 9110], 40.00th=[10421], 50.00th=[11207], 60.00th=[12649], 00:18:47.631 | 70.00th=[15926], 80.00th=[21103], 90.00th=[28181], 95.00th=[34866], 00:18:47.631 | 99.00th=[43779], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:18:47.631 | 99.99th=[46924] 00:18:47.631 bw ( KiB/s): min=16384, max=20480, per=26.25%, avg=18432.00, stdev=2896.31, samples=2 00:18:47.631 iops : min= 4096, max= 5120, avg=4608.00, 
stdev=724.08, samples=2 00:18:47.631 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.14% 00:18:47.631 lat (msec) : 2=1.71%, 4=2.70%, 10=20.08%, 20=57.13%, 50=18.21% 00:18:47.631 cpu : usr=3.60%, sys=5.69%, ctx=432, majf=0, minf=1 00:18:47.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.631 issued rwts: total=4608,4653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.631 job2: (groupid=0, jobs=1): err= 0: pid=323507: Mon Jul 15 16:17:30 2024 00:18:47.631 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:18:47.631 slat (usec): min=2, max=13375, avg=110.96, stdev=581.35 00:18:47.631 clat (usec): min=6900, max=59383, avg=15578.67, stdev=8124.56 00:18:47.631 lat (usec): min=6906, max=59395, avg=15689.64, stdev=8125.87 00:18:47.631 clat percentiles (usec): 00:18:47.631 | 1.00th=[ 7570], 5.00th=[10552], 10.00th=[11338], 20.00th=[12256], 00:18:47.631 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13698], 60.00th=[14222], 00:18:47.631 | 70.00th=[14877], 80.00th=[15926], 90.00th=[19006], 95.00th=[24773], 00:18:47.631 | 99.00th=[58983], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:18:47.631 | 99.99th=[59507] 00:18:47.631 write: IOPS=4446, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1004msec); 0 zone resets 00:18:47.631 slat (usec): min=3, max=45213, avg=114.71, stdev=892.65 00:18:47.631 clat (usec): min=393, max=34018, avg=14044.30, stdev=3717.27 00:18:47.631 lat (usec): min=3606, max=55990, avg=14159.02, stdev=3778.86 00:18:47.631 clat percentiles (usec): 00:18:47.631 | 1.00th=[ 7046], 5.00th=[10028], 10.00th=[11863], 20.00th=[12256], 00:18:47.631 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:18:47.631 | 70.00th=[14615], 80.00th=[15401], 90.00th=[18220], 95.00th=[22676], 00:18:47.631 | 99.00th=[28705], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:18:47.631 | 99.99th=[33817] 00:18:47.631 bw ( KiB/s): min=16384, max=18304, per=24.70%, avg=17344.00, stdev=1357.65, samples=2 00:18:47.631 iops : min= 4096, max= 4576, avg=4336.00, stdev=339.41, samples=2 00:18:47.631 lat (usec) : 500=0.01% 00:18:47.631 lat (msec) : 4=0.21%, 10=3.36%, 20=87.78%, 50=7.15%, 100=1.48% 00:18:47.631 cpu : usr=3.99%, sys=7.48%, ctx=550, majf=0, minf=1 00:18:47.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.631 issued rwts: total=4096,4464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.631 job3: (groupid=0, jobs=1): err= 0: pid=323508: Mon Jul 15 16:17:30 2024 00:18:47.631 read: IOPS=4518, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1002msec) 00:18:47.631 slat (usec): min=3, max=11440, avg=110.68, stdev=674.69 00:18:47.631 clat (usec): min=695, max=28886, avg=14158.91, stdev=3507.34 00:18:47.631 lat (usec): min=1986, max=32494, avg=14269.60, stdev=3545.12 00:18:47.631 clat percentiles (usec): 00:18:47.631 | 1.00th=[ 5473], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11994], 00:18:47.631 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13566], 60.00th=[14091], 00:18:47.631 | 70.00th=[14615], 80.00th=[16450], 90.00th=[18744], 95.00th=[21890], 00:18:47.631 | 
99.00th=[26346], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:18:47.631 | 99.99th=[28967] 00:18:47.631 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:18:47.631 slat (usec): min=4, max=10238, avg=99.76, stdev=605.19 00:18:47.631 clat (usec): min=1436, max=28810, avg=13660.59, stdev=3584.04 00:18:47.631 lat (usec): min=1456, max=28827, avg=13760.34, stdev=3635.80 00:18:47.631 clat percentiles (usec): 00:18:47.631 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[11731], 00:18:47.631 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:18:47.631 | 70.00th=[14222], 80.00th=[16450], 90.00th=[19530], 95.00th=[21103], 00:18:47.631 | 99.00th=[22152], 99.50th=[22152], 99.90th=[27657], 99.95th=[27919], 00:18:47.631 | 99.99th=[28705] 00:18:47.631 bw ( KiB/s): min=16384, max=20480, per=26.25%, avg=18432.00, stdev=2896.31, samples=2 00:18:47.631 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:18:47.631 lat (usec) : 750=0.01% 00:18:47.631 lat (msec) : 2=0.19%, 4=0.07%, 10=7.86%, 20=83.95%, 50=7.92% 00:18:47.631 cpu : usr=4.70%, sys=7.29%, ctx=412, majf=0, minf=1 00:18:47.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:47.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.631 issued rwts: total=4528,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.631 00:18:47.631 Run status group 0 (all jobs): 00:18:47.631 READ: bw=65.4MiB/s (68.6MB/s), 13.9MiB/s-18.0MiB/s (14.6MB/s-18.8MB/s), io=65.7MiB (68.9MB), run=1002-1004msec 00:18:47.631 WRITE: bw=68.6MiB/s (71.9MB/s), 15.2MiB/s-18.1MiB/s (15.9MB/s-19.0MB/s), io=68.8MiB (72.2MB), run=1002-1004msec 00:18:47.631 00:18:47.631 Disk stats (read/write): 00:18:47.631 nvme0n1: ios=2646/3072, merge=0/0, ticks=20634/32570, in_queue=53204, util=99.20% 00:18:47.631 nvme0n2: ios=4022/4096, merge=0/0, ticks=43419/51211, in_queue=94630, util=96.75% 00:18:47.631 nvme0n3: ios=3568/3584, merge=0/0, ticks=17820/17558, in_queue=35378, util=99.06% 00:18:47.631 nvme0n4: ios=3976/4096, merge=0/0, ticks=23532/23677, in_queue=47209, util=89.27% 00:18:47.631 16:17:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:47.631 [global] 00:18:47.631 thread=1 00:18:47.631 invalidate=1 00:18:47.631 rw=randwrite 00:18:47.631 time_based=1 00:18:47.631 runtime=1 00:18:47.631 ioengine=libaio 00:18:47.631 direct=1 00:18:47.631 bs=4096 00:18:47.631 iodepth=128 00:18:47.631 norandommap=0 00:18:47.631 numjobs=1 00:18:47.631 00:18:47.631 verify_dump=1 00:18:47.631 verify_backlog=512 00:18:47.631 verify_state_save=0 00:18:47.631 do_verify=1 00:18:47.631 verify=crc32c-intel 00:18:47.631 [job0] 00:18:47.631 filename=/dev/nvme0n1 00:18:47.631 [job1] 00:18:47.631 filename=/dev/nvme0n2 00:18:47.631 [job2] 00:18:47.631 filename=/dev/nvme0n3 00:18:47.631 [job3] 00:18:47.631 filename=/dev/nvme0n4 00:18:47.631 Could not set queue depth (nvme0n1) 00:18:47.632 Could not set queue depth (nvme0n2) 00:18:47.632 Could not set queue depth (nvme0n3) 00:18:47.632 Could not set queue depth (nvme0n4) 00:18:47.632 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.632 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.632 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.632 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:47.632 fio-3.35 00:18:47.632 Starting 4 threads 00:18:49.007 00:18:49.007 job0: (groupid=0, jobs=1): err= 0: pid=323745: Mon Jul 15 16:17:31 2024 00:18:49.007 read: IOPS=3118, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1006msec) 00:18:49.007 slat (usec): min=3, max=22932, avg=179.91, stdev=1160.17 00:18:49.007 clat (usec): min=2942, max=88058, avg=21682.63, stdev=14661.83 00:18:49.007 lat (usec): min=5815, max=88076, avg=21862.54, stdev=14781.80 00:18:49.007 clat percentiles (usec): 00:18:49.007 | 1.00th=[ 8717], 5.00th=[11207], 10.00th=[11600], 20.00th=[12649], 00:18:49.007 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15926], 60.00th=[16909], 00:18:49.007 | 70.00th=[18744], 80.00th=[24249], 90.00th=[47449], 95.00th=[57410], 00:18:49.007 | 99.00th=[68682], 99.50th=[73925], 99.90th=[80217], 99.95th=[84411], 00:18:49.007 | 99.99th=[87557] 00:18:49.007 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:18:49.007 slat (usec): min=5, max=9296, avg=113.60, stdev=560.63 00:18:49.007 clat (usec): min=6172, max=73276, avg=16556.00, stdev=9914.32 00:18:49.007 lat (usec): min=6183, max=73285, avg=16669.60, stdev=9959.65 00:18:49.007 clat percentiles (usec): 00:18:49.007 | 1.00th=[ 7963], 5.00th=[10290], 10.00th=[10683], 20.00th=[11469], 00:18:49.007 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13566], 60.00th=[14222], 00:18:49.007 | 70.00th=[15008], 80.00th=[18220], 90.00th=[22152], 95.00th=[43254], 00:18:49.007 | 99.00th=[57934], 99.50th=[59507], 99.90th=[61080], 99.95th=[72877], 00:18:49.007 | 99.99th=[72877] 00:18:49.007 bw ( KiB/s): min=11936, max=16232, per=20.48%, avg=14084.00, stdev=3037.73, samples=2 00:18:49.007 iops : min= 2984, max= 4058, avg=3521.00, stdev=759.43, samples=2 00:18:49.007 lat (msec) : 4=0.01%, 10=2.71%, 20=75.85%, 50=15.56%, 100=5.86% 00:18:49.007 cpu : usr=4.18%, sys=5.57%, ctx=382, majf=0, minf=11 00:18:49.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:49.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.007 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.007 job1: (groupid=0, jobs=1): err= 0: pid=323746: Mon Jul 15 16:17:31 2024 00:18:49.007 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:18:49.007 slat (usec): min=3, max=12196, avg=111.15, stdev=732.84 00:18:49.007 clat (usec): min=4627, max=48525, avg=13311.49, stdev=5161.49 00:18:49.007 lat (usec): min=4633, max=48532, avg=13422.64, stdev=5224.22 00:18:49.007 clat percentiles (usec): 00:18:49.007 | 1.00th=[ 4686], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[11207], 00:18:49.007 | 30.00th=[11600], 40.00th=[11731], 50.00th=[12387], 60.00th=[12518], 00:18:49.007 | 70.00th=[12780], 80.00th=[13698], 90.00th=[19006], 95.00th=[23987], 00:18:49.007 | 99.00th=[35914], 99.50th=[40633], 99.90th=[48497], 99.95th=[48497], 00:18:49.007 | 99.99th=[48497] 00:18:49.007 write: IOPS=4001, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1011msec); 0 zone resets 00:18:49.007 slat (usec): min=3, max=10222, avg=133.97, stdev=598.36 00:18:49.007 clat (usec): min=621, 
max=52623, avg=19852.47, stdev=11053.59 00:18:49.007 lat (usec): min=627, max=52631, avg=19986.44, stdev=11127.41 00:18:49.007 clat percentiles (usec): 00:18:49.007 | 1.00th=[ 2409], 5.00th=[ 6915], 10.00th=[ 8717], 20.00th=[10814], 00:18:49.007 | 30.00th=[11600], 40.00th=[12387], 50.00th=[14353], 60.00th=[21890], 00:18:49.007 | 70.00th=[26608], 80.00th=[31065], 90.00th=[36963], 95.00th=[40109], 00:18:49.007 | 99.00th=[45351], 99.50th=[47449], 99.90th=[52691], 99.95th=[52691], 00:18:49.007 | 99.99th=[52691] 00:18:49.007 bw ( KiB/s): min=11720, max=19632, per=22.79%, avg=15676.00, stdev=5594.63, samples=2 00:18:49.007 iops : min= 2930, max= 4908, avg=3919.00, stdev=1398.66, samples=2 00:18:49.007 lat (usec) : 750=0.07% 00:18:49.007 lat (msec) : 2=0.10%, 4=1.21%, 10=10.88%, 20=60.75%, 50=26.92% 00:18:49.007 lat (msec) : 100=0.08% 00:18:49.007 cpu : usr=3.47%, sys=5.64%, ctx=496, majf=0, minf=15 00:18:49.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:49.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.007 issued rwts: total=3584,4046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.007 job2: (groupid=0, jobs=1): err= 0: pid=323747: Mon Jul 15 16:17:31 2024 00:18:49.007 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:18:49.007 slat (usec): min=3, max=8561, avg=100.95, stdev=556.04 00:18:49.007 clat (usec): min=7546, max=28360, avg=12792.40, stdev=2491.94 00:18:49.007 lat (usec): min=7782, max=28375, avg=12893.35, stdev=2527.81 00:18:49.007 clat percentiles (usec): 00:18:49.007 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11731], 00:18:49.007 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:18:49.007 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15139], 95.00th=[17171], 00:18:49.007 | 99.00th=[23725], 99.50th=[25560], 99.90th=[26084], 99.95th=[28443], 00:18:49.007 | 99.99th=[28443] 00:18:49.007 write: IOPS=5080, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:18:49.007 slat (usec): min=5, max=11998, avg=96.28, stdev=557.93 00:18:49.007 clat (usec): min=4989, max=33176, avg=13227.67, stdev=3071.51 00:18:49.007 lat (usec): min=5786, max=33193, avg=13323.95, stdev=3121.63 00:18:49.007 clat percentiles (usec): 00:18:49.007 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[11469], 20.00th=[11863], 00:18:49.007 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:18:49.007 | 70.00th=[12780], 80.00th=[12911], 90.00th=[17171], 95.00th=[21103], 00:18:49.007 | 99.00th=[23987], 99.50th=[25035], 99.90th=[28443], 99.95th=[30802], 00:18:49.007 | 99.99th=[33162] 00:18:49.007 bw ( KiB/s): min=19600, max=20272, per=28.99%, avg=19936.00, stdev=475.18, samples=2 00:18:49.007 iops : min= 4900, max= 5068, avg=4984.00, stdev=118.79, samples=2 00:18:49.007 lat (msec) : 10=7.40%, 20=87.17%, 50=5.43% 00:18:49.007 cpu : usr=5.47%, sys=8.76%, ctx=509, majf=0, minf=15 00:18:49.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:49.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.008 issued rwts: total=4608,5111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.008 job3: (groupid=0, jobs=1): err= 0: 
pid=323748: Mon Jul 15 16:17:31 2024 00:18:49.008 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:18:49.008 slat (usec): min=2, max=26256, avg=112.90, stdev=796.06 00:18:49.008 clat (usec): min=4989, max=59182, avg=14713.91, stdev=7586.78 00:18:49.008 lat (usec): min=4998, max=59189, avg=14826.82, stdev=7620.39 00:18:49.008 clat percentiles (usec): 00:18:49.008 | 1.00th=[ 6194], 5.00th=[ 7701], 10.00th=[10552], 20.00th=[11600], 00:18:49.008 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:18:49.008 | 70.00th=[13698], 80.00th=[14353], 90.00th=[20317], 95.00th=[27919], 00:18:49.008 | 99.00th=[50070], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:18:49.008 | 99.99th=[58983] 00:18:49.008 write: IOPS=4633, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1002msec); 0 zone resets 00:18:49.008 slat (usec): min=4, max=11354, avg=94.03, stdev=510.86 00:18:49.008 clat (usec): min=430, max=38990, avg=12716.13, stdev=2780.42 00:18:49.008 lat (usec): min=3285, max=39023, avg=12810.16, stdev=2791.15 00:18:49.008 clat percentiles (usec): 00:18:49.008 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[11994], 00:18:49.008 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:18:49.008 | 70.00th=[12780], 80.00th=[13173], 90.00th=[14222], 95.00th=[18482], 00:18:49.008 | 99.00th=[28181], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:18:49.008 | 99.99th=[39060] 00:18:49.008 bw ( KiB/s): min=16576, max=20288, per=26.80%, avg=18432.00, stdev=2624.78, samples=2 00:18:49.008 iops : min= 4144, max= 5072, avg=4608.00, stdev=656.20, samples=2 00:18:49.008 lat (usec) : 500=0.01% 00:18:49.008 lat (msec) : 4=0.35%, 10=7.79%, 20=85.85%, 50=5.66%, 100=0.34% 00:18:49.008 cpu : usr=5.19%, sys=7.39%, ctx=492, majf=0, minf=9 00:18:49.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:49.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.008 issued rwts: total=4608,4643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.008 00:18:49.008 Run status group 0 (all jobs): 00:18:49.008 READ: bw=61.6MiB/s (64.6MB/s), 12.2MiB/s-18.0MiB/s (12.8MB/s-18.8MB/s), io=62.3MiB (65.3MB), run=1002-1011msec 00:18:49.008 WRITE: bw=67.2MiB/s (70.4MB/s), 13.9MiB/s-19.8MiB/s (14.6MB/s-20.8MB/s), io=67.9MiB (71.2MB), run=1002-1011msec 00:18:49.008 00:18:49.008 Disk stats (read/write): 00:18:49.008 nvme0n1: ios=2615/2801, merge=0/0, ticks=18635/15425, in_queue=34060, util=97.90% 00:18:49.008 nvme0n2: ios=3111/3447, merge=0/0, ticks=33296/52825, in_queue=86121, util=97.33% 00:18:49.008 nvme0n3: ios=3630/4096, merge=0/0, ticks=23861/25021, in_queue=48882, util=97.08% 00:18:49.008 nvme0n4: ios=3584/3729, merge=0/0, ticks=25814/17328, in_queue=43142, util=89.13% 00:18:49.008 16:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:49.008 16:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=323883 00:18:49.008 16:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:49.008 16:17:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:49.008 [global] 00:18:49.008 thread=1 00:18:49.008 invalidate=1 00:18:49.008 rw=read 00:18:49.008 time_based=1 00:18:49.008 runtime=10 00:18:49.008 ioengine=libaio 00:18:49.008 direct=1 00:18:49.008 
bs=4096 00:18:49.008 iodepth=1 00:18:49.008 norandommap=1 00:18:49.008 numjobs=1 00:18:49.008 00:18:49.008 [job0] 00:18:49.008 filename=/dev/nvme0n1 00:18:49.008 [job1] 00:18:49.008 filename=/dev/nvme0n2 00:18:49.008 [job2] 00:18:49.008 filename=/dev/nvme0n3 00:18:49.008 [job3] 00:18:49.008 filename=/dev/nvme0n4 00:18:49.008 Could not set queue depth (nvme0n1) 00:18:49.008 Could not set queue depth (nvme0n2) 00:18:49.008 Could not set queue depth (nvme0n3) 00:18:49.008 Could not set queue depth (nvme0n4) 00:18:49.265 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.265 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.265 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.265 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.265 fio-3.35 00:18:49.265 Starting 4 threads 00:18:52.547 16:17:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:52.547 16:17:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:52.547 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=757760, buflen=4096 00:18:52.547 fio: pid=323975, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.547 16:17:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.547 16:17:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:52.547 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=319488, buflen=4096 00:18:52.547 fio: pid=323974, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:52.805 16:17:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:52.805 16:17:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:52.805 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=40464384, buflen=4096 00:18:52.805 fio: pid=323972, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:53.063 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=37466112, buflen=4096 00:18:53.063 fio: pid=323973, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:53.063 16:17:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.063 16:17:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:53.063 00:18:53.063 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=323972: Mon Jul 15 16:17:36 2024 00:18:53.063 read: IOPS=2842, BW=11.1MiB/s (11.6MB/s)(38.6MiB/3476msec) 00:18:53.063 slat (usec): min=4, max=33715, avg=17.23, stdev=368.42 00:18:53.063 clat (usec): min=196, max=41270, avg=329.47, stdev=1019.87 00:18:53.063 lat (usec): min=203, max=41284, avg=346.71, stdev=1084.98 00:18:53.063 clat percentiles (usec): 
00:18:53.063 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:18:53.063 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 269], 60.00th=[ 285], 00:18:53.063 | 70.00th=[ 310], 80.00th=[ 367], 90.00th=[ 461], 95.00th=[ 519], 00:18:53.063 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 1942], 99.95th=[41157], 00:18:53.063 | 99.99th=[41157] 00:18:53.063 bw ( KiB/s): min= 9812, max=14520, per=58.86%, avg=12160.67, stdev=2078.38, samples=6 00:18:53.063 iops : min= 2453, max= 3630, avg=3040.17, stdev=519.59, samples=6 00:18:53.063 lat (usec) : 250=41.49%, 500=52.22%, 750=6.13%, 1000=0.03% 00:18:53.063 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.06% 00:18:53.063 cpu : usr=1.61%, sys=3.91%, ctx=9883, majf=0, minf=1 00:18:53.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.063 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.063 issued rwts: total=9880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.063 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=323973: Mon Jul 15 16:17:36 2024 00:18:53.063 read: IOPS=2449, BW=9796KiB/s (10.0MB/s)(35.7MiB/3735msec) 00:18:53.063 slat (usec): min=4, max=14737, avg=13.39, stdev=206.15 00:18:53.063 clat (usec): min=189, max=42003, avg=389.91, stdev=2325.89 00:18:53.063 lat (usec): min=196, max=42017, avg=403.30, stdev=2335.25 00:18:53.063 clat percentiles (usec): 00:18:53.063 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:18:53.063 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 249], 00:18:53.063 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 326], 95.00th=[ 363], 00:18:53.063 | 99.00th=[ 490], 99.50th=[ 529], 99.90th=[41157], 99.95th=[41157], 00:18:53.063 | 99.99th=[42206] 00:18:53.063 bw ( KiB/s): min= 96, max=16146, per=45.71%, avg=9443.43, stdev=6874.28, samples=7 00:18:53.063 iops : min= 24, max= 4036, avg=2360.71, stdev=1718.45, samples=7 00:18:53.063 lat (usec) : 250=61.43%, 500=37.86%, 750=0.35%, 1000=0.01% 00:18:53.063 lat (msec) : 4=0.01%, 50=0.33% 00:18:53.063 cpu : usr=1.21%, sys=3.48%, ctx=9154, majf=0, minf=1 00:18:53.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.063 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.063 issued rwts: total=9148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.063 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=323974: Mon Jul 15 16:17:36 2024 00:18:53.064 read: IOPS=24, BW=97.3KiB/s (99.6kB/s)(312KiB/3207msec) 00:18:53.064 slat (usec): min=10, max=2898, avg=54.64, stdev=324.12 00:18:53.064 clat (usec): min=320, max=43999, avg=40769.37, stdev=4670.75 00:18:53.064 lat (usec): min=338, max=44019, avg=40824.54, stdev=4684.54 00:18:53.064 clat percentiles (usec): 00:18:53.064 | 1.00th=[ 322], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:53.064 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:53.064 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:53.064 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:18:53.064 | 99.99th=[43779] 00:18:53.064 
bw ( KiB/s): min= 95, max= 104, per=0.47%, avg=97.17, stdev= 3.37, samples=6 00:18:53.064 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:18:53.064 lat (usec) : 500=1.27% 00:18:53.064 lat (msec) : 50=97.47% 00:18:53.064 cpu : usr=0.09%, sys=0.00%, ctx=81, majf=0, minf=1 00:18:53.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.064 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.064 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.064 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=323975: Mon Jul 15 16:17:36 2024 00:18:53.064 read: IOPS=63, BW=252KiB/s (258kB/s)(740KiB/2941msec) 00:18:53.064 slat (nsec): min=6841, max=33475, avg=13680.91, stdev=6287.45 00:18:53.064 clat (usec): min=284, max=42006, avg=15754.94, stdev=19730.49 00:18:53.064 lat (usec): min=293, max=42023, avg=15768.60, stdev=19733.30 00:18:53.064 clat percentiles (usec): 00:18:53.064 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 351], 00:18:53.064 | 30.00th=[ 388], 40.00th=[ 412], 50.00th=[ 474], 60.00th=[ 570], 00:18:53.064 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:53.064 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:53.064 | 99.99th=[42206] 00:18:53.064 bw ( KiB/s): min= 96, max= 560, per=1.32%, avg=273.40, stdev=233.16, samples=5 00:18:53.064 iops : min= 24, max= 140, avg=68.20, stdev=58.11, samples=5 00:18:53.064 lat (usec) : 500=53.76%, 750=7.53%, 1000=0.54% 00:18:53.064 lat (msec) : 50=37.63% 00:18:53.064 cpu : usr=0.17%, sys=0.00%, ctx=186, majf=0, minf=1 00:18:53.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.064 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.064 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.064 00:18:53.064 Run status group 0 (all jobs): 00:18:53.064 READ: bw=20.2MiB/s (21.2MB/s), 97.3KiB/s-11.1MiB/s (99.6kB/s-11.6MB/s), io=75.3MiB (79.0MB), run=2941-3735msec 00:18:53.064 00:18:53.064 Disk stats (read/write): 00:18:53.064 nvme0n1: ios=9876/0, merge=0/0, ticks=3039/0, in_queue=3039, util=94.59% 00:18:53.064 nvme0n2: ios=8666/0, merge=0/0, ticks=3385/0, in_queue=3385, util=95.63% 00:18:53.064 nvme0n3: ios=130/0, merge=0/0, ticks=4442/0, in_queue=4442, util=99.03% 00:18:53.064 nvme0n4: ios=183/0, merge=0/0, ticks=2834/0, in_queue=2834, util=96.75% 00:18:53.322 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.322 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:53.580 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.580 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:53.839 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:18:53.839 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:54.096 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:54.096 16:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:54.354 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:54.354 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 323883 00:18:54.354 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:54.354 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:54.612 nvmf hotplug test: fio failed as expected 00:18:54.612 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.871 rmmod nvme_tcp 00:18:54.871 rmmod nvme_fabrics 00:18:54.871 rmmod nvme_keyring 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # 
return 0 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 321972 ']' 00:18:54.871 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 321972 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 321972 ']' 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 321972 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 321972 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 321972' 00:18:54.872 killing process with pid 321972 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 321972 00:18:54.872 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 321972 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.131 16:17:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.042 16:17:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:57.042 00:18:57.042 real 0m23.385s 00:18:57.042 user 1m22.659s 00:18:57.042 sys 0m6.687s 00:18:57.042 16:17:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:57.042 16:17:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.042 ************************************ 00:18:57.042 END TEST nvmf_fio_target 00:18:57.042 ************************************ 00:18:57.042 16:17:40 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:57.042 16:17:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:57.042 16:17:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:57.042 16:17:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.301 ************************************ 00:18:57.301 START TEST nvmf_bdevio 00:18:57.301 ************************************ 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:57.301 * Looking for test storage... 
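The fio-target teardown traced above reduces to a short shell pattern. A minimal sketch (not the actual fio.sh; rpc.py is assumed to talk to the default /var/tmp/spdk.sock, and the 15-iteration bound on the wait loop is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Hot-remove the malloc bdevs backing the subsystem namespaces.
    for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"
    done

    # Drop the initiator-side connection, then wait for the block device
    # carrying the target's serial to disappear from lsblk.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
        (( ++i > 15 )) && { echo "device never disappeared" >&2; exit 1; }
    done

    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Note that fio exiting with err=121 (Remote I/O error) is the expected outcome of pulling the bdevs out from under it, which the script records as 'nvmf hotplug test: fio failed as expected'.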
00:18:57.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.301 16:17:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.206 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.207 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:59.207 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:59.207 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.207 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.207 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.207 16:17:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:59.207 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:59.207 Found net devices under 0000:84:00.0: cvl_0_0 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:59.207 
Found net devices under 0000:84:00.1: cvl_0_1 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:59.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:18:59.207 00:18:59.207 --- 10.0.0.2 ping statistics --- 00:18:59.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.207 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:18:59.207 00:18:59.207 --- 10.0.0.1 ping statistics --- 00:18:59.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.207 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=326607 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 326607 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 326607 ']' 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:59.207 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.466 [2024-07-15 16:17:42.200612] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:59.466 [2024-07-15 16:17:42.200694] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.466 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.466 [2024-07-15 16:17:42.270561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.466 [2024-07-15 16:17:42.362487] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.466 [2024-07-15 16:17:42.362550] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:59.466 [2024-07-15 16:17:42.362566] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.466 [2024-07-15 16:17:42.362580] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.466 [2024-07-15 16:17:42.362592] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.466 [2024-07-15 16:17:42.362675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:59.466 [2024-07-15 16:17:42.362732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:59.466 [2024-07-15 16:17:42.362784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:59.466 [2024-07-15 16:17:42.362787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 [2024-07-15 16:17:42.515378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 Malloc0 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
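The target-side provisioning for the bdevio run is just five RPCs; condensed from the trace (rpc_cmd is assumed to be a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as traced
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, bdevio attaches to the subsystem through the JSON that gen_nvmf_target_json emits below (a single bdev_nvme_attach_controller call naming Nvme1).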
00:18:59.726 [2024-07-15 16:17:42.567050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:59.726 { 00:18:59.726 "params": { 00:18:59.726 "name": "Nvme$subsystem", 00:18:59.726 "trtype": "$TEST_TRANSPORT", 00:18:59.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:59.726 "adrfam": "ipv4", 00:18:59.726 "trsvcid": "$NVMF_PORT", 00:18:59.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:59.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:59.726 "hdgst": ${hdgst:-false}, 00:18:59.726 "ddgst": ${ddgst:-false} 00:18:59.726 }, 00:18:59.726 "method": "bdev_nvme_attach_controller" 00:18:59.726 } 00:18:59.726 EOF 00:18:59.726 )") 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:59.726 16:17:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:59.726 "params": { 00:18:59.726 "name": "Nvme1", 00:18:59.726 "trtype": "tcp", 00:18:59.726 "traddr": "10.0.0.2", 00:18:59.726 "adrfam": "ipv4", 00:18:59.726 "trsvcid": "4420", 00:18:59.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.726 "hdgst": false, 00:18:59.726 "ddgst": false 00:18:59.726 }, 00:18:59.726 "method": "bdev_nvme_attach_controller" 00:18:59.726 }' 00:18:59.726 [2024-07-15 16:17:42.612971] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:59.726 [2024-07-15 16:17:42.613040] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326752 ] 00:18:59.726 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.726 [2024-07-15 16:17:42.674844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:59.986 [2024-07-15 16:17:42.768974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.986 [2024-07-15 16:17:42.769026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.986 [2024-07-15 16:17:42.769029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.244 I/O targets: 00:19:00.244 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:00.244 00:19:00.244 00:19:00.244 CUnit - A unit testing framework for C - Version 2.1-3 00:19:00.244 http://cunit.sourceforge.net/ 00:19:00.244 00:19:00.244 00:19:00.244 Suite: bdevio tests on: Nvme1n1 00:19:00.244 Test: blockdev write read block ...passed 00:19:00.244 Test: blockdev write zeroes read block ...passed 00:19:00.244 Test: blockdev write zeroes read no split ...passed 00:19:00.244 Test: blockdev write zeroes read split ...passed 00:19:00.244 Test: blockdev write zeroes read split partial ...passed 00:19:00.244 Test: blockdev reset ...[2024-07-15 16:17:43.147942] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:00.244 [2024-07-15 16:17:43.148075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce37f0 (9): Bad file descriptor 00:19:00.244 [2024-07-15 16:17:43.199047] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:00.244 passed 00:19:00.244 Test: blockdev write read 8 blocks ...passed 00:19:00.244 Test: blockdev write read size > 128k ...passed 00:19:00.245 Test: blockdev write read invalid size ...passed 00:19:00.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:00.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:00.502 Test: blockdev write read max offset ...passed 00:19:00.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:00.502 Test: blockdev writev readv 8 blocks ...passed 00:19:00.502 Test: blockdev writev readv 30 x 1block ...passed 00:19:00.502 Test: blockdev writev readv block ...passed 00:19:00.502 Test: blockdev writev readv size > 128k ...passed 00:19:00.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:00.502 Test: blockdev comparev and writev ...[2024-07-15 16:17:43.372443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.372478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.372502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.372526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.372928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.372953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.372975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.372990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.373382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.373407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.373428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.373444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.373844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.373868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.373890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:00.502 [2024-07-15 16:17:43.373906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:00.502 passed 00:19:00.502 Test: blockdev nvme passthru rw ...passed 00:19:00.502 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:17:43.456150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.502 [2024-07-15 16:17:43.456179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.456332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.502 [2024-07-15 16:17:43.456355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.456501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.502 [2024-07-15 16:17:43.456524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:00.502 [2024-07-15 16:17:43.456673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:00.502 [2024-07-15 16:17:43.456695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:00.502 passed 00:19:00.502 Test: blockdev nvme admin passthru ...passed 00:19:00.760 Test: blockdev copy ...passed 00:19:00.760 00:19:00.760 Run Summary: Type Total Ran Passed Failed Inactive 00:19:00.760 suites 1 1 n/a 0 0 00:19:00.760 tests 23 23 23 0 0 00:19:00.760 asserts 152 152 152 0 n/a 00:19:00.760 00:19:00.760 Elapsed time = 1.040 seconds 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:00.760 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:00.760 rmmod nvme_tcp 00:19:00.760 rmmod nvme_fabrics 00:19:00.760 rmmod nvme_keyring 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 326607 ']' 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 326607 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
326607 ']' 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 326607 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 326607 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 326607' 00:19:01.019 killing process with pid 326607 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 326607 00:19:01.019 16:17:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 326607 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.279 16:17:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.184 16:17:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.184 00:19:03.184 real 0m6.070s 00:19:03.184 user 0m9.370s 00:19:03.184 sys 0m2.006s 00:19:03.184 16:17:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:03.184 16:17:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:03.184 ************************************ 00:19:03.184 END TEST nvmf_bdevio 00:19:03.184 ************************************ 00:19:03.184 16:17:46 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:03.184 16:17:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:03.184 16:17:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:03.184 16:17:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.184 ************************************ 00:19:03.184 START TEST nvmf_auth_target 00:19:03.184 ************************************ 00:19:03.184 16:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:03.441 * Looking for test storage... 
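The killprocess helper runs twice in this section (pids 321972 and 326607) with the same shape each time. A simplified reconstruction (the real helper in autotest_common.sh also special-cases processes running under sudo, elided here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        # The trace reads the command name first; reactor_0/reactor_3 are
        # SPDK reactor threads, so the plain-kill path is taken.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

The final kill/wait pair is why the trace shows both '# kill 326607' and '# wait 326607' before nvmftestfini tears the network state down.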
00:19:03.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.441 16:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.343 16:17:48 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:05.343 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:05.344 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:05.344 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:19:05.344 Found net devices under 0000:84:00.0: cvl_0_0 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:05.344 Found net devices under 0000:84:00.1: cvl_0_1 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:05.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:19:05.344 00:19:05.344 --- 10.0.0.2 ping statistics --- 00:19:05.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.344 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:19:05.344 00:19:05.344 --- 10.0.0.1 ping statistics --- 00:19:05.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.344 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=328829 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 328829 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 328829 ']' 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
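
The trace above is nvmf_tcp_init building the loopback topology for the test: one port of the NIC pair (cvl_0_0) is moved into a fresh network namespace where the target will run and addressed 10.0.0.2/24, the other (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms the path before the target app starts. A minimal sketch of the same setup, using eth_a/eth_b as hypothetical stand-ins for the two ports:

  # eth_a = target-side port, eth_b = initiator-side port (hypothetical names)
  ip netns add tgt_ns                                        # namespace for the target app
  ip link set eth_a netns tgt_ns                             # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev eth_b                          # initiator address
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_a     # target address
  ip link set eth_b up
  ip netns exec tgt_ns ip link set eth_a up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i eth_b -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP
  ping -c 1 10.0.0.2                                         # root ns -> namespace
  ip netns exec tgt_ns ping -c 1 10.0.0.1                    # namespace -> root ns
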
00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:05.344 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.602 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:05.602 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:05.602 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:05.602 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.602 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=328856 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b95e8c0f6d672bd4bc1e58158f3ddb458f970f2c253135ed 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oBl 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b95e8c0f6d672bd4bc1e58158f3ddb458f970f2c253135ed 0 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b95e8c0f6d672bd4bc1e58158f3ddb458f970f2c253135ed 0 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b95e8c0f6d672bd4bc1e58158f3ddb458f970f2c253135ed 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oBl 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oBl 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.oBl 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0d7c1df365a3995ce988aec821a3bf6255aa569330847b36da73262d5ae0d494 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aVf 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0d7c1df365a3995ce988aec821a3bf6255aa569330847b36da73262d5ae0d494 3 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0d7c1df365a3995ce988aec821a3bf6255aa569330847b36da73262d5ae0d494 3 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0d7c1df365a3995ce988aec821a3bf6255aa569330847b36da73262d5ae0d494 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aVf 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aVf 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.aVf 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a755ae4a75e84306f141ca08dc78f9f 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QeC 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7a755ae4a75e84306f141ca08dc78f9f 1 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a755ae4a75e84306f141ca08dc78f9f 1 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=7a755ae4a75e84306f141ca08dc78f9f 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QeC 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QeC 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.QeC 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f95698974877cad2cc48764103ba90861aee1b75fd1e5790 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yVg 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f95698974877cad2cc48764103ba90861aee1b75fd1e5790 2 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f95698974877cad2cc48764103ba90861aee1b75fd1e5790 2 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f95698974877cad2cc48764103ba90861aee1b75fd1e5790 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yVg 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yVg 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.yVg 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=918fa39c1bcfe1e0d75091eafc71321ccb1ce48023992534 00:19:05.861 
16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.TBI 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 918fa39c1bcfe1e0d75091eafc71321ccb1ce48023992534 2 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 918fa39c1bcfe1e0d75091eafc71321ccb1ce48023992534 2 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:05.861 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:05.862 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=918fa39c1bcfe1e0d75091eafc71321ccb1ce48023992534 00:19:05.862 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:05.862 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.TBI 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.TBI 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.TBI 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=00242efc2e34349ac3b73876e7e486d9 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BRj 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 00242efc2e34349ac3b73876e7e486d9 1 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 00242efc2e34349ac3b73876e7e486d9 1 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.119 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=00242efc2e34349ac3b73876e7e486d9 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BRj 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BRj 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.BRj 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3ad139b9fb895808caf0eaf8b11f8a3e9ee23d30038a03590e7ff67a7daee488 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rVv 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3ad139b9fb895808caf0eaf8b11f8a3e9ee23d30038a03590e7ff67a7daee488 3 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3ad139b9fb895808caf0eaf8b11f8a3e9ee23d30038a03590e7ff67a7daee488 3 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3ad139b9fb895808caf0eaf8b11f8a3e9ee23d30038a03590e7ff67a7daee488 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rVv 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rVv 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.rVv 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 328829 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 328829 ']' 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
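
The gen_dhchap_key calls above produce the four key/ctrlr-key pairs the test cycles through: xxd pulls len/2 random bytes from /dev/urandom as a hex string, and an inline Python snippet wraps that string into the DH-HMAC-CHAP configured-secret form DHHC-1:<t>:<base64>:, where <t> is the hash-transformation index from the digests map (00 null, 01 sha256, 02 sha384, 03 sha512). The logged secrets decode back to the hex text, so the base64 payload is the ASCII key followed by a 4-byte trailer; assuming that trailer is the little-endian CRC-32 used by nvme-cli's secret representation, a standalone sketch of one key looks like:

  # Sketch of gen_dhchap_key null 48: 48 hex chars -> DHHC-1 secret file
  key_hex=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes as 48 hex chars
  key_file=$(mktemp -t spdk.key-null.XXX)
  # Payload = ASCII hex string + CRC-32 trailer (assumed layout), base64-encoded; 00 = null transform
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key_hex" > "$key_file"
  chmod 0600 "$key_file"

Note that ckeys[3] is deliberately left empty; key3 is used for one-way authentication later in the run.
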
00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.120 16:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 328856 /var/tmp/host.sock 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 328856 ']' 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:06.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.376 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oBl 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oBl 00:19:06.652 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oBl 00:19:06.944 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.aVf ]] 00:19:06.944 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aVf 00:19:06.944 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.944 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.944 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.944 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aVf 00:19:06.944 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aVf 00:19:07.204 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:07.204 16:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QeC 00:19:07.204 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.204 16:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.204 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.204 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QeC 00:19:07.204 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QeC 00:19:07.462 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.yVg ]] 00:19:07.462 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yVg 00:19:07.462 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.462 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.462 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.462 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yVg 00:19:07.462 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yVg 00:19:07.718 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:07.718 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.TBI 00:19:07.718 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.718 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.718 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.718 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.TBI 00:19:07.718 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.TBI 00:19:07.975 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.BRj ]] 00:19:07.975 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BRj 00:19:07.975 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.975 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.975 16:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.976 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BRj 00:19:07.976 16:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.BRj 00:19:08.233 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:08.233 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rVv 00:19:08.233 16:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.233 16:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.233 16:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.233 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rVv 00:19:08.233 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rVv 00:19:08.490 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:08.490 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:08.490 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.490 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.490 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.490 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.747 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.006 00:19:09.006 16:17:51 nvmf_tcp.nvmf_auth_target -- 
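
Two SPDK daemons are in play from here on: the target (nvmf_tgt inside the cvl_0_0_ns_spdk namespace, default RPC socket /var/tmp/spdk.sock, driven by rpc_cmd) and a host-side spdk_tgt on /var/tmp/host.sock (driven by hostrpc). The keyring loop at target/auth.sh@81-86 registers every generated key file under the same name in both keyrings, so either end can reference it by name during authentication; condensed:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
      $RPC keyring_file_add_key "key$i" "${keys[$i]}"                        # target keyring
      $RPC -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host keyring
      if [[ -n ${ckeys[$i]} ]]; then                                         # skipped for the empty ckey3
          $RPC keyring_file_add_key "ckey$i" "${ckeys[$i]}"
          $RPC -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
      fi
  done
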
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.006 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.006 16:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.264 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.264 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.264 16:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.264 16:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.264 16:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.264 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.264 { 00:19:09.264 "cntlid": 1, 00:19:09.264 "qid": 0, 00:19:09.264 "state": "enabled", 00:19:09.264 "listen_address": { 00:19:09.264 "trtype": "TCP", 00:19:09.264 "adrfam": "IPv4", 00:19:09.264 "traddr": "10.0.0.2", 00:19:09.264 "trsvcid": "4420" 00:19:09.264 }, 00:19:09.264 "peer_address": { 00:19:09.264 "trtype": "TCP", 00:19:09.264 "adrfam": "IPv4", 00:19:09.264 "traddr": "10.0.0.1", 00:19:09.264 "trsvcid": "53672" 00:19:09.264 }, 00:19:09.264 "auth": { 00:19:09.264 "state": "completed", 00:19:09.264 "digest": "sha256", 00:19:09.264 "dhgroup": "null" 00:19:09.264 } 00:19:09.264 } 00:19:09.264 ]' 00:19:09.264 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.522 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.522 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.522 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.522 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.522 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.522 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.522 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.780 16:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.720 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.978 16:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.235 00:19:11.235 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.235 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.235 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.492 { 00:19:11.492 "cntlid": 3, 00:19:11.492 "qid": 0, 00:19:11.492 "state": "enabled", 00:19:11.492 "listen_address": { 00:19:11.492 
"trtype": "TCP", 00:19:11.492 "adrfam": "IPv4", 00:19:11.492 "traddr": "10.0.0.2", 00:19:11.492 "trsvcid": "4420" 00:19:11.492 }, 00:19:11.492 "peer_address": { 00:19:11.492 "trtype": "TCP", 00:19:11.492 "adrfam": "IPv4", 00:19:11.492 "traddr": "10.0.0.1", 00:19:11.492 "trsvcid": "53712" 00:19:11.492 }, 00:19:11.492 "auth": { 00:19:11.492 "state": "completed", 00:19:11.492 "digest": "sha256", 00:19:11.492 "dhgroup": "null" 00:19:11.492 } 00:19:11.492 } 00:19:11.492 ]' 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.492 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.750 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.750 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.750 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.008 16:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:12.941 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.199 16:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.456 00:19:13.456 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.456 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.456 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.713 { 00:19:13.713 "cntlid": 5, 00:19:13.713 "qid": 0, 00:19:13.713 "state": "enabled", 00:19:13.713 "listen_address": { 00:19:13.713 "trtype": "TCP", 00:19:13.713 "adrfam": "IPv4", 00:19:13.713 "traddr": "10.0.0.2", 00:19:13.713 "trsvcid": "4420" 00:19:13.713 }, 00:19:13.713 "peer_address": { 00:19:13.713 "trtype": "TCP", 00:19:13.713 "adrfam": "IPv4", 00:19:13.713 "traddr": "10.0.0.1", 00:19:13.713 "trsvcid": "53740" 00:19:13.713 }, 00:19:13.713 "auth": { 00:19:13.713 "state": "completed", 00:19:13.713 "digest": "sha256", 00:19:13.713 "dhgroup": "null" 00:19:13.713 } 00:19:13.713 } 00:19:13.713 ]' 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- 
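
Each connect_authenticate round above follows the same shape: pin the host's negotiable digest/dhgroup, authorize the host NQN on the subsystem with a key pair, attach a controller from the SPDK host (which runs the DH-HMAC-CHAP exchange), read the qpair back from the target to confirm the negotiated digest, dhgroup, and auth state, then repeat the handshake once more with the kernel initiator before tearing down. One round in outline, reusing $RPC from the sketch above (the DHHC-1 literals are placeholders for the real secrets):

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'       # expect: sha256 / null / completed
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "${HOSTNQN##*:}" \
      --dhchap-secret 'DHHC-1:00:<base64-key>:' --dhchap-ctrl-secret 'DHHC-1:03:<base64-ctrl-key>:'
  nvme disconnect -n "$SUBNQN"
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"      # revoke before the next round
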
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.713 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.281 16:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.215 16:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.473 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.730 00:19:15.730 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.730 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.730 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.988 { 00:19:15.988 "cntlid": 7, 00:19:15.988 "qid": 0, 00:19:15.988 "state": "enabled", 00:19:15.988 "listen_address": { 00:19:15.988 "trtype": "TCP", 00:19:15.988 "adrfam": "IPv4", 00:19:15.988 "traddr": "10.0.0.2", 00:19:15.988 "trsvcid": "4420" 00:19:15.988 }, 00:19:15.988 "peer_address": { 00:19:15.988 "trtype": "TCP", 00:19:15.988 "adrfam": "IPv4", 00:19:15.988 "traddr": "10.0.0.1", 00:19:15.988 "trsvcid": "53760" 00:19:15.988 }, 00:19:15.988 "auth": { 00:19:15.988 "state": "completed", 00:19:15.988 "digest": "sha256", 00:19:15.988 "dhgroup": "null" 00:19:15.988 } 00:19:15.988 } 00:19:15.988 ]' 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.988 16:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.246 16:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.182 
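
The key3 round just above is the odd one out: ckeys[3] was left empty, so both the subsystem grant and the controller attach pass only --dhchap-key key3, and the kernel connect passes only --dhchap-secret, i.e. the host proves its identity but never challenges the controller back (one-way rather than bidirectional authentication). Condensed, with the shorthands from the previous sketch:

  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3   # no --dhchap-ctrlr-key
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3
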
16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.182 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.440 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.008 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.008 16:18:00 
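
With all four key indices exercised against dhgroup null, the outer loop advances to ffdhe2048 and replays the same rounds. The only knob that changes is the host-side set_options call; a non-null group folds an ephemeral finite-field Diffie-Hellman exchange into the DH-HMAC-CHAP challenge/response rather than relying on the shared secret alone:

  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
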
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.008 { 00:19:18.008 "cntlid": 9, 00:19:18.008 "qid": 0, 00:19:18.008 "state": "enabled", 00:19:18.008 "listen_address": { 00:19:18.008 "trtype": "TCP", 00:19:18.008 "adrfam": "IPv4", 00:19:18.008 "traddr": "10.0.0.2", 00:19:18.008 "trsvcid": "4420" 00:19:18.008 }, 00:19:18.008 "peer_address": { 00:19:18.008 "trtype": "TCP", 00:19:18.008 "adrfam": "IPv4", 00:19:18.008 "traddr": "10.0.0.1", 00:19:18.008 "trsvcid": "36264" 00:19:18.008 }, 00:19:18.008 "auth": { 00:19:18.008 "state": "completed", 00:19:18.008 "digest": "sha256", 00:19:18.008 "dhgroup": "ffdhe2048" 00:19:18.008 } 00:19:18.008 } 00:19:18.008 ]' 00:19:18.008 16:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.266 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.266 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.266 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.266 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.266 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.266 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.266 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.524 16:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.460 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:19.717 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:19.717 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.717 16:18:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.717 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:19.717 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:19.717 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.717 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.718 16:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.718 16:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.718 16:18:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.718 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.718 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.975 00:19:19.975 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.975 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.975 16:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.233 { 00:19:20.233 "cntlid": 11, 00:19:20.233 "qid": 0, 00:19:20.233 "state": "enabled", 00:19:20.233 "listen_address": { 00:19:20.233 "trtype": "TCP", 00:19:20.233 "adrfam": "IPv4", 00:19:20.233 "traddr": "10.0.0.2", 00:19:20.233 "trsvcid": "4420" 00:19:20.233 }, 00:19:20.233 "peer_address": { 00:19:20.233 "trtype": "TCP", 00:19:20.233 "adrfam": "IPv4", 00:19:20.233 "traddr": "10.0.0.1", 00:19:20.233 "trsvcid": "36286" 00:19:20.233 }, 00:19:20.233 "auth": { 00:19:20.233 "state": "completed", 00:19:20.233 "digest": "sha256", 00:19:20.233 "dhgroup": "ffdhe2048" 00:19:20.233 } 00:19:20.233 } 00:19:20.233 ]' 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.233 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.491 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.491 16:18:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.491 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.491 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.491 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.491 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.748 16:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.687 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.945 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.946 16:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.204 00:19:22.204 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.204 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.204 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.463 { 00:19:22.463 "cntlid": 13, 00:19:22.463 "qid": 0, 00:19:22.463 "state": "enabled", 00:19:22.463 "listen_address": { 00:19:22.463 "trtype": "TCP", 00:19:22.463 "adrfam": "IPv4", 00:19:22.463 "traddr": "10.0.0.2", 00:19:22.463 "trsvcid": "4420" 00:19:22.463 }, 00:19:22.463 "peer_address": { 00:19:22.463 "trtype": "TCP", 00:19:22.463 "adrfam": "IPv4", 00:19:22.463 "traddr": "10.0.0.1", 00:19:22.463 "trsvcid": "36310" 00:19:22.463 }, 00:19:22.463 "auth": { 00:19:22.463 "state": "completed", 00:19:22.463 "digest": "sha256", 00:19:22.463 "dhgroup": "ffdhe2048" 00:19:22.463 } 00:19:22.463 } 00:19:22.463 ]' 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.463 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.721 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.721 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.721 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.721 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.721 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.979 16:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:19:23.914 16:18:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.914 16:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:23.914 16:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.914 16:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.914 16:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.914 16:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.914 16:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:23.914 16:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.172 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.431 00:19:24.689 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.689 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.689 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.947 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.947 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:24.947 16:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.947 16:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.947 16:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.947 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.947 { 00:19:24.947 "cntlid": 15, 00:19:24.947 "qid": 0, 00:19:24.947 "state": "enabled", 00:19:24.947 "listen_address": { 00:19:24.947 "trtype": "TCP", 00:19:24.947 "adrfam": "IPv4", 00:19:24.947 "traddr": "10.0.0.2", 00:19:24.947 "trsvcid": "4420" 00:19:24.947 }, 00:19:24.947 "peer_address": { 00:19:24.947 "trtype": "TCP", 00:19:24.947 "adrfam": "IPv4", 00:19:24.947 "traddr": "10.0.0.1", 00:19:24.947 "trsvcid": "36334" 00:19:24.947 }, 00:19:24.947 "auth": { 00:19:24.947 "state": "completed", 00:19:24.947 "digest": "sha256", 00:19:24.947 "dhgroup": "ffdhe2048" 00:19:24.947 } 00:19:24.947 } 00:19:24.947 ]' 00:19:24.947 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.948 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.948 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.948 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.948 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.948 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.948 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.948 16:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.207 16:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:19:26.157 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.161 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:26.161 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.161 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.161 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.161 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.161 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.162 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.162 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.419 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.987 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.987 16:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.987 { 00:19:26.987 "cntlid": 17, 00:19:26.987 "qid": 0, 00:19:26.987 "state": "enabled", 00:19:26.987 "listen_address": { 00:19:26.987 "trtype": "TCP", 00:19:26.987 "adrfam": "IPv4", 00:19:26.987 "traddr": "10.0.0.2", 00:19:26.987 "trsvcid": "4420" 00:19:26.987 }, 00:19:26.987 "peer_address": { 00:19:26.987 "trtype": "TCP", 00:19:26.987 "adrfam": "IPv4", 00:19:26.987 "traddr": "10.0.0.1", 00:19:26.987 "trsvcid": "36360" 00:19:26.987 }, 00:19:26.987 "auth": { 00:19:26.987 "state": "completed", 00:19:26.987 "digest": "sha256", 00:19:26.987 "dhgroup": "ffdhe3072" 00:19:26.987 } 00:19:26.987 } 00:19:26.987 ]' 00:19:26.987 16:18:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.244 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.244 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.244 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.244 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.244 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.244 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.244 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.505 16:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.477 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.735 
16:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.735 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.993 00:19:29.252 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.252 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.252 16:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.252 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.252 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.252 16:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.252 16:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.511 { 00:19:29.511 "cntlid": 19, 00:19:29.511 "qid": 0, 00:19:29.511 "state": "enabled", 00:19:29.511 "listen_address": { 00:19:29.511 "trtype": "TCP", 00:19:29.511 "adrfam": "IPv4", 00:19:29.511 "traddr": "10.0.0.2", 00:19:29.511 "trsvcid": "4420" 00:19:29.511 }, 00:19:29.511 "peer_address": { 00:19:29.511 "trtype": "TCP", 00:19:29.511 "adrfam": "IPv4", 00:19:29.511 "traddr": "10.0.0.1", 00:19:29.511 "trsvcid": "45766" 00:19:29.511 }, 00:19:29.511 "auth": { 00:19:29.511 "state": "completed", 00:19:29.511 "digest": "sha256", 00:19:29.511 "dhgroup": "ffdhe3072" 00:19:29.511 } 00:19:29.511 } 00:19:29.511 ]' 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.511 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.769 16:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:19:30.701 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.702 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:30.702 16:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.702 16:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.702 16:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.702 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.702 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.702 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.959 16:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.218 00:19:31.476 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.476 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:19:31.476 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.476 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.745 { 00:19:31.745 "cntlid": 21, 00:19:31.745 "qid": 0, 00:19:31.745 "state": "enabled", 00:19:31.745 "listen_address": { 00:19:31.745 "trtype": "TCP", 00:19:31.745 "adrfam": "IPv4", 00:19:31.745 "traddr": "10.0.0.2", 00:19:31.745 "trsvcid": "4420" 00:19:31.745 }, 00:19:31.745 "peer_address": { 00:19:31.745 "trtype": "TCP", 00:19:31.745 "adrfam": "IPv4", 00:19:31.745 "traddr": "10.0.0.1", 00:19:31.745 "trsvcid": "45788" 00:19:31.745 }, 00:19:31.745 "auth": { 00:19:31.745 "state": "completed", 00:19:31.745 "digest": "sha256", 00:19:31.745 "dhgroup": "ffdhe3072" 00:19:31.745 } 00:19:31.745 } 00:19:31.745 ]' 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.745 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.002 16:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:32.936 16:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.194 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.450 00:19:33.450 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.450 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.450 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.707 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.707 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.707 16:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.707 16:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 16:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.707 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.707 { 00:19:33.707 "cntlid": 23, 00:19:33.707 "qid": 0, 00:19:33.707 "state": "enabled", 00:19:33.707 "listen_address": { 00:19:33.707 "trtype": "TCP", 00:19:33.707 "adrfam": "IPv4", 00:19:33.707 "traddr": "10.0.0.2", 00:19:33.707 "trsvcid": "4420" 00:19:33.707 }, 00:19:33.707 "peer_address": { 00:19:33.707 "trtype": "TCP", 00:19:33.707 "adrfam": "IPv4", 
00:19:33.707 "traddr": "10.0.0.1", 00:19:33.707 "trsvcid": "45822" 00:19:33.707 }, 00:19:33.707 "auth": { 00:19:33.707 "state": "completed", 00:19:33.707 "digest": "sha256", 00:19:33.708 "dhgroup": "ffdhe3072" 00:19:33.708 } 00:19:33.708 } 00:19:33.708 ]' 00:19:33.708 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.708 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.708 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.964 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.964 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.964 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.964 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.964 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.222 16:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:19:35.153 16:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.153 16:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:35.154 16:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.154 16:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.154 16:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.154 16:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.154 16:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.154 16:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.154 16:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.411 16:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.412 16:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.412 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.412 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.670 00:19:35.670 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.670 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.670 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.928 { 00:19:35.928 "cntlid": 25, 00:19:35.928 "qid": 0, 00:19:35.928 "state": "enabled", 00:19:35.928 "listen_address": { 00:19:35.928 "trtype": "TCP", 00:19:35.928 "adrfam": "IPv4", 00:19:35.928 "traddr": "10.0.0.2", 00:19:35.928 "trsvcid": "4420" 00:19:35.928 }, 00:19:35.928 "peer_address": { 00:19:35.928 "trtype": "TCP", 00:19:35.928 "adrfam": "IPv4", 00:19:35.928 "traddr": "10.0.0.1", 00:19:35.928 "trsvcid": "45844" 00:19:35.928 }, 00:19:35.928 "auth": { 00:19:35.928 "state": "completed", 00:19:35.928 "digest": "sha256", 00:19:35.928 "dhgroup": "ffdhe4096" 00:19:35.928 } 00:19:35.928 } 00:19:35.928 ]' 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.928 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.186 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.186 16:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.186 16:18:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.444 16:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.382 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.641 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.899 00:19:37.899 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.899 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.899 16:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.156 { 00:19:38.156 "cntlid": 27, 00:19:38.156 "qid": 0, 00:19:38.156 "state": "enabled", 00:19:38.156 "listen_address": { 00:19:38.156 "trtype": "TCP", 00:19:38.156 "adrfam": "IPv4", 00:19:38.156 "traddr": "10.0.0.2", 00:19:38.156 "trsvcid": "4420" 00:19:38.156 }, 00:19:38.156 "peer_address": { 00:19:38.156 "trtype": "TCP", 00:19:38.156 "adrfam": "IPv4", 00:19:38.156 "traddr": "10.0.0.1", 00:19:38.156 "trsvcid": "56570" 00:19:38.156 }, 00:19:38.156 "auth": { 00:19:38.156 "state": "completed", 00:19:38.156 "digest": "sha256", 00:19:38.156 "dhgroup": "ffdhe4096" 00:19:38.156 } 00:19:38.156 } 00:19:38.156 ]' 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.156 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.415 16:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.792 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.050 00:19:40.050 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.050 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.050 16:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.307 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.307 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.307 16:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.307 16:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.307 16:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.307 
16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.307 { 00:19:40.307 "cntlid": 29, 00:19:40.307 "qid": 0, 00:19:40.307 "state": "enabled", 00:19:40.307 "listen_address": { 00:19:40.307 "trtype": "TCP", 00:19:40.307 "adrfam": "IPv4", 00:19:40.307 "traddr": "10.0.0.2", 00:19:40.307 "trsvcid": "4420" 00:19:40.307 }, 00:19:40.307 "peer_address": { 00:19:40.307 "trtype": "TCP", 00:19:40.307 "adrfam": "IPv4", 00:19:40.307 "traddr": "10.0.0.1", 00:19:40.307 "trsvcid": "56584" 00:19:40.307 }, 00:19:40.307 "auth": { 00:19:40.307 "state": "completed", 00:19:40.307 "digest": "sha256", 00:19:40.307 "dhgroup": "ffdhe4096" 00:19:40.307 } 00:19:40.307 } 00:19:40.307 ]' 00:19:40.307 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.565 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.565 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.565 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.565 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.565 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.565 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.565 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.822 16:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:41.759 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.017 16:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.274 00:19:42.274 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.274 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.274 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.531 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.532 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.532 16:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.532 16:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.532 16:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.532 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.532 { 00:19:42.532 "cntlid": 31, 00:19:42.532 "qid": 0, 00:19:42.532 "state": "enabled", 00:19:42.532 "listen_address": { 00:19:42.532 "trtype": "TCP", 00:19:42.532 "adrfam": "IPv4", 00:19:42.532 "traddr": "10.0.0.2", 00:19:42.532 "trsvcid": "4420" 00:19:42.532 }, 00:19:42.532 "peer_address": { 00:19:42.532 "trtype": "TCP", 00:19:42.532 "adrfam": "IPv4", 00:19:42.532 "traddr": "10.0.0.1", 00:19:42.532 "trsvcid": "56624" 00:19:42.532 }, 00:19:42.532 "auth": { 00:19:42.532 "state": "completed", 00:19:42.532 "digest": "sha256", 00:19:42.532 "dhgroup": "ffdhe4096" 00:19:42.532 } 00:19:42.532 } 00:19:42.532 ]' 00:19:42.532 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.789 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.789 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.789 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.789 16:18:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.789 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.789 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.789 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.047 16:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:43.986 16:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:44.244 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.832 00:19:44.832 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.832 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.832 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.089 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.089 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.089 16:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.089 16:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.089 16:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.089 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.089 { 00:19:45.089 "cntlid": 33, 00:19:45.089 "qid": 0, 00:19:45.090 "state": "enabled", 00:19:45.090 "listen_address": { 00:19:45.090 "trtype": "TCP", 00:19:45.090 "adrfam": "IPv4", 00:19:45.090 "traddr": "10.0.0.2", 00:19:45.090 "trsvcid": "4420" 00:19:45.090 }, 00:19:45.090 "peer_address": { 00:19:45.090 "trtype": "TCP", 00:19:45.090 "adrfam": "IPv4", 00:19:45.090 "traddr": "10.0.0.1", 00:19:45.090 "trsvcid": "56664" 00:19:45.090 }, 00:19:45.090 "auth": { 00:19:45.090 "state": "completed", 00:19:45.090 "digest": "sha256", 00:19:45.090 "dhgroup": "ffdhe6144" 00:19:45.090 } 00:19:45.090 } 00:19:45.090 ]' 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.090 16:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.349 16:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:19:46.285 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:46.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.543 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:46.543 16:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.543 16:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.543 16:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.543 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.543 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.543 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.801 16:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.367 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.367 { 00:19:47.367 "cntlid": 35, 00:19:47.367 "qid": 0, 00:19:47.367 "state": "enabled", 00:19:47.367 "listen_address": { 00:19:47.367 "trtype": "TCP", 00:19:47.367 "adrfam": "IPv4", 00:19:47.367 "traddr": "10.0.0.2", 00:19:47.367 "trsvcid": "4420" 00:19:47.367 }, 00:19:47.367 "peer_address": { 00:19:47.367 "trtype": "TCP", 00:19:47.367 "adrfam": "IPv4", 00:19:47.367 "traddr": "10.0.0.1", 00:19:47.367 "trsvcid": "56694" 00:19:47.367 }, 00:19:47.367 "auth": { 00:19:47.367 "state": "completed", 00:19:47.367 "digest": "sha256", 00:19:47.367 "dhgroup": "ffdhe6144" 00:19:47.367 } 00:19:47.367 } 00:19:47.367 ]' 00:19:47.367 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.625 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.625 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.625 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.625 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.625 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.625 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.625 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.882 16:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.819 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
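The checks traced at target/auth.sh@44-@48 above repeat one fixed pattern for every key: confirm the attached controller's name, pull the subsystem's qpairs on the target side, and assert that the negotiated digest, DH group, and auth state match what was configured. A condensed sketch of that pattern, not the verbatim script, with the socket path and NQN as in this run (hostrpc is a shorthand wrapper for the host-side rpc.py invocation shown in the trace; rpc_cmd stands for the target-side RPC call the script uses):

    # host-side SPDK app is reached through /var/tmp/host.sock; nvmf_* calls go to the target
    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # target-side query
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]               # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]            # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]            # handshake finished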
00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.078 16:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.673 00:19:49.673 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.673 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.673 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.930 { 00:19:49.930 "cntlid": 37, 00:19:49.930 "qid": 0, 00:19:49.930 "state": "enabled", 00:19:49.930 "listen_address": { 00:19:49.930 "trtype": "TCP", 00:19:49.930 "adrfam": "IPv4", 00:19:49.930 "traddr": "10.0.0.2", 00:19:49.930 "trsvcid": "4420" 00:19:49.930 }, 00:19:49.930 "peer_address": { 00:19:49.930 "trtype": "TCP", 00:19:49.930 "adrfam": "IPv4", 00:19:49.930 "traddr": "10.0.0.1", 00:19:49.930 "trsvcid": "38818" 00:19:49.930 }, 00:19:49.930 "auth": { 00:19:49.930 "state": "completed", 00:19:49.930 "digest": "sha256", 00:19:49.930 "dhgroup": "ffdhe6144" 00:19:49.930 } 00:19:49.930 } 00:19:49.930 ]' 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.930 16:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.187 16:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.142 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.400 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.964 00:19:52.221 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.221 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.221 16:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.221 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.478 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.479 { 00:19:52.479 "cntlid": 39, 00:19:52.479 "qid": 0, 00:19:52.479 "state": "enabled", 00:19:52.479 "listen_address": { 00:19:52.479 "trtype": "TCP", 00:19:52.479 "adrfam": "IPv4", 00:19:52.479 "traddr": "10.0.0.2", 00:19:52.479 "trsvcid": "4420" 00:19:52.479 }, 00:19:52.479 "peer_address": { 00:19:52.479 "trtype": "TCP", 00:19:52.479 "adrfam": "IPv4", 00:19:52.479 "traddr": "10.0.0.1", 00:19:52.479 "trsvcid": "38854" 00:19:52.479 }, 00:19:52.479 "auth": { 00:19:52.479 "state": "completed", 00:19:52.479 "digest": "sha256", 00:19:52.479 "dhgroup": "ffdhe6144" 00:19:52.479 } 00:19:52.479 } 00:19:52.479 ]' 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.479 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.736 16:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.669 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.927 16:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.858 00:19:54.858 16:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.858 16:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.858 16:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.116 16:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.116 16:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.116 16:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.116 16:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.116 16:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.116 16:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.116 { 00:19:55.116 "cntlid": 41, 00:19:55.116 "qid": 0, 00:19:55.116 "state": "enabled", 00:19:55.116 "listen_address": { 00:19:55.116 "trtype": "TCP", 00:19:55.116 "adrfam": "IPv4", 00:19:55.116 "traddr": "10.0.0.2", 00:19:55.116 "trsvcid": "4420" 00:19:55.116 }, 00:19:55.116 "peer_address": { 00:19:55.116 "trtype": "TCP", 00:19:55.116 "adrfam": "IPv4", 00:19:55.116 "traddr": "10.0.0.1", 00:19:55.116 "trsvcid": "38884" 00:19:55.116 }, 00:19:55.116 "auth": { 00:19:55.116 "state": "completed", 00:19:55.116 "digest": "sha256", 00:19:55.116 "dhgroup": "ffdhe8192" 00:19:55.116 } 00:19:55.116 } 00:19:55.116 ]' 00:19:55.116 16:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.116 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.116 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.116 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.116 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.374 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.374 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.374 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.374 16:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.307 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.872 16:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.804 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.804 { 00:19:57.804 "cntlid": 43, 00:19:57.804 "qid": 0, 00:19:57.804 "state": "enabled", 00:19:57.804 "listen_address": { 00:19:57.804 "trtype": "TCP", 00:19:57.804 "adrfam": "IPv4", 00:19:57.804 "traddr": "10.0.0.2", 00:19:57.804 "trsvcid": "4420" 00:19:57.804 }, 00:19:57.804 "peer_address": { 
00:19:57.804 "trtype": "TCP", 00:19:57.804 "adrfam": "IPv4", 00:19:57.804 "traddr": "10.0.0.1", 00:19:57.804 "trsvcid": "38906" 00:19:57.804 }, 00:19:57.804 "auth": { 00:19:57.804 "state": "completed", 00:19:57.804 "digest": "sha256", 00:19:57.804 "dhgroup": "ffdhe8192" 00:19:57.804 } 00:19:57.804 } 00:19:57.804 ]' 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.804 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.062 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.062 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.062 16:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.319 16:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.251 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.508 16:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.447 00:20:00.447 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.447 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.447 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.704 { 00:20:00.704 "cntlid": 45, 00:20:00.704 "qid": 0, 00:20:00.704 "state": "enabled", 00:20:00.704 "listen_address": { 00:20:00.704 "trtype": "TCP", 00:20:00.704 "adrfam": "IPv4", 00:20:00.704 "traddr": "10.0.0.2", 00:20:00.704 "trsvcid": "4420" 00:20:00.704 }, 00:20:00.704 "peer_address": { 00:20:00.704 "trtype": "TCP", 00:20:00.704 "adrfam": "IPv4", 00:20:00.704 "traddr": "10.0.0.1", 00:20:00.704 "trsvcid": "50590" 00:20:00.704 }, 00:20:00.704 "auth": { 00:20:00.704 "state": "completed", 00:20:00.704 "digest": "sha256", 00:20:00.704 "dhgroup": "ffdhe8192" 00:20:00.704 } 00:20:00.704 } 00:20:00.704 ]' 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.704 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.704 16:18:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.961 16:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.894 16:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.459 16:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
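Each RPC-attached round is then replayed through the Linux kernel host: target/auth.sh@52 connects with nvme-cli, passing the round's DH-HMAC-CHAP secrets, and @55 tears the session down. The shape of that call, with flags exactly as traced; here $key and $ckey are placeholders for the DHHC-1:xx:... secret blobs generated for this run, and rounds that use key3 carry no controller key, which is why the @52 invocations for key3 above omit --dhchap-ctrl-secret:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # trace shows "disconnected 1 controller(s)" on success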
00:20:03.026 00:20:03.284 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.284 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.284 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.541 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.541 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.541 16:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.541 16:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.541 16:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.541 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.541 { 00:20:03.541 "cntlid": 47, 00:20:03.541 "qid": 0, 00:20:03.541 "state": "enabled", 00:20:03.541 "listen_address": { 00:20:03.541 "trtype": "TCP", 00:20:03.541 "adrfam": "IPv4", 00:20:03.541 "traddr": "10.0.0.2", 00:20:03.541 "trsvcid": "4420" 00:20:03.541 }, 00:20:03.541 "peer_address": { 00:20:03.541 "trtype": "TCP", 00:20:03.541 "adrfam": "IPv4", 00:20:03.541 "traddr": "10.0.0.1", 00:20:03.541 "trsvcid": "50624" 00:20:03.541 }, 00:20:03.542 "auth": { 00:20:03.542 "state": "completed", 00:20:03.542 "digest": "sha256", 00:20:03.542 "dhgroup": "ffdhe8192" 00:20:03.542 } 00:20:03.542 } 00:20:03.542 ]' 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.542 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.800 16:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.733 
16:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.733 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.991 16:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.255 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.518 16:18:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.518 { 00:20:05.518 "cntlid": 49, 00:20:05.518 "qid": 0, 00:20:05.518 "state": "enabled", 00:20:05.518 "listen_address": { 00:20:05.518 "trtype": "TCP", 00:20:05.518 "adrfam": "IPv4", 00:20:05.518 "traddr": "10.0.0.2", 00:20:05.518 "trsvcid": "4420" 00:20:05.518 }, 00:20:05.518 "peer_address": { 00:20:05.518 "trtype": "TCP", 00:20:05.518 "adrfam": "IPv4", 00:20:05.518 "traddr": "10.0.0.1", 00:20:05.518 "trsvcid": "50652" 00:20:05.518 }, 00:20:05.518 "auth": { 00:20:05.518 "state": "completed", 00:20:05.518 "digest": "sha384", 00:20:05.518 "dhgroup": "null" 00:20:05.518 } 00:20:05.518 } 00:20:05.518 ]' 00:20:05.518 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.776 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.776 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.776 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:05.776 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.776 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.776 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.776 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.033 16:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:06.967 16:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.225 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.791 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.791 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.791 { 00:20:07.791 "cntlid": 51, 00:20:07.791 "qid": 0, 00:20:07.791 "state": "enabled", 00:20:07.791 "listen_address": { 00:20:07.791 "trtype": "TCP", 00:20:07.791 "adrfam": "IPv4", 00:20:07.791 "traddr": "10.0.0.2", 00:20:07.791 "trsvcid": "4420" 00:20:07.791 }, 00:20:07.791 "peer_address": { 00:20:07.791 "trtype": "TCP", 00:20:07.791 "adrfam": "IPv4", 00:20:07.791 "traddr": "10.0.0.1", 00:20:07.791 "trsvcid": "43230" 00:20:07.791 }, 00:20:07.791 "auth": { 00:20:07.791 "state": "completed", 00:20:07.791 "digest": "sha384", 00:20:07.791 "dhgroup": "null" 00:20:07.791 } 00:20:07.791 } 00:20:07.791 ]' 00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
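Each pass above is one invocation of connect_authenticate from target/auth.sh: the host driver is restricted to a single digest/dhgroup pair, the host NQN is registered on cnode0 with DH-HMAC-CHAP key N (plus a controller key when bidirectional authentication is exercised), a controller is attached, and the qpair's auth block is asserted to report state "completed" with the expected digest and dhgroup before detaching. A condensed sketch of that loop, reconstructed from the commands visible in this trace — hostrpc/rpc_cmd wrap rpc.py against the host and target sockets as the log shows; the hostnqn variable and exact loop bounds are assumptions, not the verbatim script:

  # Sketch only - reconstructed from this trace, not copied from target/auth.sh.
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02  # as seen in the log
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # limit the host to one digest/dhgroup combination
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # allow this host on the subsystem with key N (and ckeyN for bidirectional auth)
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid"
        # the freshly authenticated qpair must report state "completed"
        rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
        hostrpc bdev_nvme_detach_controller nvme0
      done
    done
  done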
00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.050 16:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.308 16:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.241 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:09.499 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.757 00:20:09.757 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.757 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.757 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.015 { 00:20:10.015 "cntlid": 53, 00:20:10.015 "qid": 0, 00:20:10.015 "state": "enabled", 00:20:10.015 "listen_address": { 00:20:10.015 "trtype": "TCP", 00:20:10.015 "adrfam": "IPv4", 00:20:10.015 "traddr": "10.0.0.2", 00:20:10.015 "trsvcid": "4420" 00:20:10.015 }, 00:20:10.015 "peer_address": { 00:20:10.015 "trtype": "TCP", 00:20:10.015 "adrfam": "IPv4", 00:20:10.015 "traddr": "10.0.0.1", 00:20:10.015 "trsvcid": "43264" 00:20:10.015 }, 00:20:10.015 "auth": { 00:20:10.015 "state": "completed", 00:20:10.015 "digest": "sha384", 00:20:10.015 "dhgroup": "null" 00:20:10.015 } 00:20:10.015 } 00:20:10.015 ]' 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.015 16:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.272 16:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:10.272 16:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.272 16:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.272 16:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.272 16:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.528 16:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.528 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.528 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.785 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.043 00:20:12.043 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.043 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.043 16:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.301 { 00:20:12.301 "cntlid": 55, 00:20:12.301 "qid": 0, 00:20:12.301 "state": "enabled", 00:20:12.301 "listen_address": { 00:20:12.301 "trtype": "TCP", 00:20:12.301 "adrfam": "IPv4", 00:20:12.301 "traddr": "10.0.0.2", 00:20:12.301 "trsvcid": "4420" 00:20:12.301 }, 00:20:12.301 "peer_address": { 00:20:12.301 "trtype": "TCP", 00:20:12.301 "adrfam": "IPv4", 00:20:12.301 "traddr": "10.0.0.1", 00:20:12.301 "trsvcid": "43296" 00:20:12.301 }, 00:20:12.301 "auth": { 00:20:12.301 "state": "completed", 00:20:12.301 "digest": "sha384", 00:20:12.301 "dhgroup": "null" 00:20:12.301 } 00:20:12.301 } 00:20:12.301 ]' 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.301 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.558 16:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.490 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:13.748 
16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.748 16:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.312 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.312 { 00:20:14.312 "cntlid": 57, 00:20:14.312 "qid": 0, 00:20:14.312 "state": "enabled", 00:20:14.312 "listen_address": { 00:20:14.312 "trtype": "TCP", 00:20:14.312 "adrfam": "IPv4", 00:20:14.312 "traddr": "10.0.0.2", 00:20:14.312 "trsvcid": "4420" 00:20:14.312 }, 00:20:14.312 "peer_address": { 00:20:14.312 "trtype": "TCP", 00:20:14.312 "adrfam": "IPv4", 00:20:14.312 "traddr": "10.0.0.1", 00:20:14.312 "trsvcid": "43318" 00:20:14.312 }, 00:20:14.312 "auth": { 00:20:14.312 "state": "completed", 00:20:14.312 "digest": "sha384", 00:20:14.312 "dhgroup": "ffdhe2048" 00:20:14.312 } 00:20:14.312 } 00:20:14.312 ]' 00:20:14.312 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.570 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.570 16:18:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.570 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.570 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.570 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.570 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.570 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.828 16:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:20:15.759 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.760 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:15.760 16:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.760 16:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.760 16:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.760 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.760 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.760 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.016 16:18:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.016 16:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.272 00:20:16.272 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.272 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.272 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.528 { 00:20:16.528 "cntlid": 59, 00:20:16.528 "qid": 0, 00:20:16.528 "state": "enabled", 00:20:16.528 "listen_address": { 00:20:16.528 "trtype": "TCP", 00:20:16.528 "adrfam": "IPv4", 00:20:16.528 "traddr": "10.0.0.2", 00:20:16.528 "trsvcid": "4420" 00:20:16.528 }, 00:20:16.528 "peer_address": { 00:20:16.528 "trtype": "TCP", 00:20:16.528 "adrfam": "IPv4", 00:20:16.528 "traddr": "10.0.0.1", 00:20:16.528 "trsvcid": "43348" 00:20:16.528 }, 00:20:16.528 "auth": { 00:20:16.528 "state": "completed", 00:20:16.528 "digest": "sha384", 00:20:16.528 "dhgroup": "ffdhe2048" 00:20:16.528 } 00:20:16.528 } 00:20:16.528 ]' 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.528 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.784 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.784 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.784 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.784 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.784 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.041 16:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.972 16:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.228 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.485 00:20:18.485 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.485 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.485 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.743 { 00:20:18.743 "cntlid": 61, 00:20:18.743 "qid": 0, 00:20:18.743 "state": "enabled", 00:20:18.743 "listen_address": { 00:20:18.743 "trtype": "TCP", 00:20:18.743 "adrfam": "IPv4", 00:20:18.743 "traddr": "10.0.0.2", 00:20:18.743 "trsvcid": "4420" 00:20:18.743 }, 00:20:18.743 "peer_address": { 00:20:18.743 "trtype": "TCP", 00:20:18.743 "adrfam": "IPv4", 00:20:18.743 "traddr": "10.0.0.1", 00:20:18.743 "trsvcid": "36462" 00:20:18.743 }, 00:20:18.743 "auth": { 00:20:18.743 "state": "completed", 00:20:18.743 "digest": "sha384", 00:20:18.743 "dhgroup": "ffdhe2048" 00:20:18.743 } 00:20:18.743 } 00:20:18.743 ]' 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.743 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.000 16:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:20.372 16:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.372 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.630 00:20:20.630 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.630 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.630 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.888 { 00:20:20.888 "cntlid": 63, 00:20:20.888 "qid": 0, 00:20:20.888 "state": "enabled", 00:20:20.888 "listen_address": { 00:20:20.888 "trtype": "TCP", 00:20:20.888 "adrfam": "IPv4", 00:20:20.888 "traddr": "10.0.0.2", 00:20:20.888 "trsvcid": "4420" 00:20:20.888 }, 00:20:20.888 "peer_address": { 00:20:20.888 "trtype": "TCP", 00:20:20.888 "adrfam": "IPv4", 00:20:20.888 "traddr": "10.0.0.1", 00:20:20.888 "trsvcid": "36482" 00:20:20.888 }, 00:20:20.888 "auth": { 00:20:20.888 "state": "completed", 00:20:20.888 "digest": 
"sha384", 00:20:20.888 "dhgroup": "ffdhe2048" 00:20:20.888 } 00:20:20.888 } 00:20:20.888 ]' 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.888 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.145 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.145 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.145 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.145 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.145 16:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.403 16:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.336 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.594 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.852 00:20:22.852 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.852 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.852 16:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.110 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.110 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.110 16:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.110 16:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.110 16:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.110 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.110 { 00:20:23.110 "cntlid": 65, 00:20:23.110 "qid": 0, 00:20:23.110 "state": "enabled", 00:20:23.110 "listen_address": { 00:20:23.110 "trtype": "TCP", 00:20:23.110 "adrfam": "IPv4", 00:20:23.110 "traddr": "10.0.0.2", 00:20:23.110 "trsvcid": "4420" 00:20:23.110 }, 00:20:23.110 "peer_address": { 00:20:23.110 "trtype": "TCP", 00:20:23.110 "adrfam": "IPv4", 00:20:23.110 "traddr": "10.0.0.1", 00:20:23.110 "trsvcid": "36514" 00:20:23.110 }, 00:20:23.110 "auth": { 00:20:23.110 "state": "completed", 00:20:23.110 "digest": "sha384", 00:20:23.110 "dhgroup": "ffdhe3072" 00:20:23.110 } 00:20:23.110 } 00:20:23.110 ]' 00:20:23.110 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.368 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.368 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.368 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.368 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.368 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.368 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.368 16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.626 
16:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.559 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.817 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.075 00:20:25.075 16:19:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.075 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.075 16:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.333 { 00:20:25.333 "cntlid": 67, 00:20:25.333 "qid": 0, 00:20:25.333 "state": "enabled", 00:20:25.333 "listen_address": { 00:20:25.333 "trtype": "TCP", 00:20:25.333 "adrfam": "IPv4", 00:20:25.333 "traddr": "10.0.0.2", 00:20:25.333 "trsvcid": "4420" 00:20:25.333 }, 00:20:25.333 "peer_address": { 00:20:25.333 "trtype": "TCP", 00:20:25.333 "adrfam": "IPv4", 00:20:25.333 "traddr": "10.0.0.1", 00:20:25.333 "trsvcid": "36544" 00:20:25.333 }, 00:20:25.333 "auth": { 00:20:25.333 "state": "completed", 00:20:25.333 "digest": "sha384", 00:20:25.333 "dhgroup": "ffdhe3072" 00:20:25.333 } 00:20:25.333 } 00:20:25.333 ]' 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.333 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.591 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.591 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.591 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.591 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.591 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.848 16:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:20:26.789 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.789 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:26.789 16:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.789 16:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.789 
16:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.789 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.789 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:26.789 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.047 16:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.305 00:20:27.305 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.305 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.305 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.563 { 00:20:27.563 "cntlid": 69, 00:20:27.563 "qid": 0, 00:20:27.563 "state": "enabled", 00:20:27.563 "listen_address": { 
00:20:27.563 "trtype": "TCP", 00:20:27.563 "adrfam": "IPv4", 00:20:27.563 "traddr": "10.0.0.2", 00:20:27.563 "trsvcid": "4420" 00:20:27.563 }, 00:20:27.563 "peer_address": { 00:20:27.563 "trtype": "TCP", 00:20:27.563 "adrfam": "IPv4", 00:20:27.563 "traddr": "10.0.0.1", 00:20:27.563 "trsvcid": "36550" 00:20:27.563 }, 00:20:27.563 "auth": { 00:20:27.563 "state": "completed", 00:20:27.563 "digest": "sha384", 00:20:27.563 "dhgroup": "ffdhe3072" 00:20:27.563 } 00:20:27.563 } 00:20:27.563 ]' 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.563 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.821 16:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.754 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.012 
16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.012 16:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.578 00:20:29.578 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.578 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.578 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.578 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.578 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.578 16:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.578 16:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.837 { 00:20:29.837 "cntlid": 71, 00:20:29.837 "qid": 0, 00:20:29.837 "state": "enabled", 00:20:29.837 "listen_address": { 00:20:29.837 "trtype": "TCP", 00:20:29.837 "adrfam": "IPv4", 00:20:29.837 "traddr": "10.0.0.2", 00:20:29.837 "trsvcid": "4420" 00:20:29.837 }, 00:20:29.837 "peer_address": { 00:20:29.837 "trtype": "TCP", 00:20:29.837 "adrfam": "IPv4", 00:20:29.837 "traddr": "10.0.0.1", 00:20:29.837 "trsvcid": "52794" 00:20:29.837 }, 00:20:29.837 "auth": { 00:20:29.837 "state": "completed", 00:20:29.837 "digest": "sha384", 00:20:29.837 "dhgroup": "ffdhe3072" 00:20:29.837 } 00:20:29.837 } 00:20:29.837 ]' 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.837 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.095 16:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.026 16:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.284 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.849 00:20:31.849 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.849 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.849 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.849 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.849 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.849 16:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.849 16:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.106 { 00:20:32.106 "cntlid": 73, 00:20:32.106 "qid": 0, 00:20:32.106 "state": "enabled", 00:20:32.106 "listen_address": { 00:20:32.106 "trtype": "TCP", 00:20:32.106 "adrfam": "IPv4", 00:20:32.106 "traddr": "10.0.0.2", 00:20:32.106 "trsvcid": "4420" 00:20:32.106 }, 00:20:32.106 "peer_address": { 00:20:32.106 "trtype": "TCP", 00:20:32.106 "adrfam": "IPv4", 00:20:32.106 "traddr": "10.0.0.1", 00:20:32.106 "trsvcid": "52818" 00:20:32.106 }, 00:20:32.106 "auth": { 00:20:32.106 "state": "completed", 00:20:32.106 "digest": "sha384", 00:20:32.106 "dhgroup": "ffdhe4096" 00:20:32.106 } 00:20:32.106 } 00:20:32.106 ]' 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.106 16:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.364 16:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.312 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.568 16:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.569 16:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 16:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.569 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.569 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.132 00:20:34.132 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.132 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.132 16:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.132 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.132 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.132 16:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.132 16:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.389 { 00:20:34.389 "cntlid": 75, 00:20:34.389 "qid": 0, 00:20:34.389 "state": "enabled", 00:20:34.389 "listen_address": { 00:20:34.389 "trtype": "TCP", 00:20:34.389 "adrfam": "IPv4", 00:20:34.389 "traddr": "10.0.0.2", 00:20:34.389 "trsvcid": "4420" 00:20:34.389 }, 00:20:34.389 "peer_address": { 00:20:34.389 "trtype": "TCP", 00:20:34.389 "adrfam": "IPv4", 00:20:34.389 "traddr": "10.0.0.1", 00:20:34.389 "trsvcid": "52852" 00:20:34.389 }, 00:20:34.389 "auth": { 00:20:34.389 "state": "completed", 00:20:34.389 "digest": "sha384", 00:20:34.389 "dhgroup": "ffdhe4096" 00:20:34.389 } 00:20:34.389 } 00:20:34.389 ]' 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.389 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.647 16:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.579 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.837 16:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.403 00:20:36.403 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.403 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.403 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.661 { 00:20:36.661 "cntlid": 77, 00:20:36.661 "qid": 0, 00:20:36.661 "state": "enabled", 00:20:36.661 "listen_address": { 00:20:36.661 "trtype": "TCP", 00:20:36.661 "adrfam": "IPv4", 00:20:36.661 "traddr": "10.0.0.2", 00:20:36.661 "trsvcid": "4420" 00:20:36.661 }, 00:20:36.661 "peer_address": { 00:20:36.661 "trtype": "TCP", 00:20:36.661 "adrfam": "IPv4", 00:20:36.661 "traddr": "10.0.0.1", 00:20:36.661 "trsvcid": "52886" 00:20:36.661 }, 00:20:36.661 "auth": { 00:20:36.661 "state": "completed", 00:20:36.661 "digest": "sha384", 00:20:36.661 "dhgroup": "ffdhe4096" 00:20:36.661 } 00:20:36.661 } 00:20:36.661 ]' 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.661 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.919 16:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.851 16:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.109 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.676 00:20:38.676 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.676 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.676 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.934 { 00:20:38.934 "cntlid": 79, 00:20:38.934 "qid": 0, 00:20:38.934 "state": "enabled", 00:20:38.934 "listen_address": { 00:20:38.934 "trtype": "TCP", 00:20:38.934 "adrfam": "IPv4", 00:20:38.934 "traddr": "10.0.0.2", 00:20:38.934 "trsvcid": "4420" 00:20:38.934 }, 00:20:38.934 "peer_address": { 00:20:38.934 "trtype": "TCP", 00:20:38.934 "adrfam": "IPv4", 00:20:38.934 "traddr": "10.0.0.1", 00:20:38.934 "trsvcid": "46710" 00:20:38.934 }, 00:20:38.934 "auth": { 00:20:38.934 "state": "completed", 00:20:38.934 "digest": "sha384", 00:20:38.934 "dhgroup": "ffdhe4096" 00:20:38.934 } 00:20:38.934 } 00:20:38.934 ]' 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.934 16:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.192 16:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:20:40.126 16:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.126 
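Besides the RPC-level attach, each round also exercises the kernel initiator, as in the nvme connect/disconnect pair just above. A sketch of that leg, with the secret blobs elided (the real base64 DHHC-1:xx:...: values are generated by the test and printed in the log; key3 rounds pass only --dhchap-secret and omit the controller secret):

  # Kernel-initiator leg of a round, per the trace above.
  # <dhchap-secret>/<dhchap-ctrl-secret> stand for the DHHC-1 blobs in the log.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-secret "<dhchap-secret>" --dhchap-ctrl-secret "<dhchap-ctrl-secret>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # log shows "disconnected 1 controller(s)"
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"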
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.126 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.410 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.411 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.986 00:20:40.986 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.986 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.986 16:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.243 { 00:20:41.243 "cntlid": 81, 00:20:41.243 "qid": 0, 00:20:41.243 "state": "enabled", 00:20:41.243 "listen_address": { 00:20:41.243 "trtype": "TCP", 00:20:41.243 "adrfam": "IPv4", 00:20:41.243 "traddr": "10.0.0.2", 00:20:41.243 "trsvcid": "4420" 00:20:41.243 }, 00:20:41.243 "peer_address": { 00:20:41.243 "trtype": "TCP", 00:20:41.243 "adrfam": "IPv4", 00:20:41.243 "traddr": "10.0.0.1", 00:20:41.243 "trsvcid": "46730" 00:20:41.243 }, 00:20:41.243 "auth": { 00:20:41.243 "state": "completed", 00:20:41.243 "digest": "sha384", 00:20:41.243 "dhgroup": "ffdhe6144" 00:20:41.243 } 00:20:41.243 } 00:20:41.243 ]' 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.243 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.501 16:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.874 16:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.440 00:20:43.440 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.440 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.440 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.698 { 00:20:43.698 "cntlid": 83, 00:20:43.698 "qid": 0, 00:20:43.698 "state": "enabled", 00:20:43.698 "listen_address": { 00:20:43.698 "trtype": "TCP", 00:20:43.698 "adrfam": "IPv4", 00:20:43.698 "traddr": "10.0.0.2", 00:20:43.698 "trsvcid": "4420" 00:20:43.698 }, 00:20:43.698 "peer_address": { 00:20:43.698 "trtype": "TCP", 00:20:43.698 "adrfam": "IPv4", 00:20:43.698 "traddr": "10.0.0.1", 00:20:43.698 "trsvcid": "46764" 00:20:43.698 }, 00:20:43.698 "auth": { 00:20:43.698 "state": "completed", 00:20:43.698 "digest": "sha384", 00:20:43.698 
"dhgroup": "ffdhe6144" 00:20:43.698 } 00:20:43.698 } 00:20:43.698 ]' 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.698 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.956 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.956 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.956 16:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.328 16:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.328 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:45.328 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.328 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.328 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:45.328 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:45.328 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.328 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.329 16:19:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.329 16:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.329 16:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.329 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.329 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.931 00:20:45.931 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.931 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.931 16:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.189 { 00:20:46.189 "cntlid": 85, 00:20:46.189 "qid": 0, 00:20:46.189 "state": "enabled", 00:20:46.189 "listen_address": { 00:20:46.189 "trtype": "TCP", 00:20:46.189 "adrfam": "IPv4", 00:20:46.189 "traddr": "10.0.0.2", 00:20:46.189 "trsvcid": "4420" 00:20:46.189 }, 00:20:46.189 "peer_address": { 00:20:46.189 "trtype": "TCP", 00:20:46.189 "adrfam": "IPv4", 00:20:46.189 "traddr": "10.0.0.1", 00:20:46.189 "trsvcid": "46790" 00:20:46.189 }, 00:20:46.189 "auth": { 00:20:46.189 "state": "completed", 00:20:46.189 "digest": "sha384", 00:20:46.189 "dhgroup": "ffdhe6144" 00:20:46.189 } 00:20:46.189 } 00:20:46.189 ]' 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.189 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.447 16:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:20:47.381 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.637 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:47.637 16:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.637 16:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.637 16:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.637 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.637 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.637 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.894 16:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.460 00:20:48.460 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.460 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.460 16:19:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.460 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.460 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.460 16:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.460 16:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.718 { 00:20:48.718 "cntlid": 87, 00:20:48.718 "qid": 0, 00:20:48.718 "state": "enabled", 00:20:48.718 "listen_address": { 00:20:48.718 "trtype": "TCP", 00:20:48.718 "adrfam": "IPv4", 00:20:48.718 "traddr": "10.0.0.2", 00:20:48.718 "trsvcid": "4420" 00:20:48.718 }, 00:20:48.718 "peer_address": { 00:20:48.718 "trtype": "TCP", 00:20:48.718 "adrfam": "IPv4", 00:20:48.718 "traddr": "10.0.0.1", 00:20:48.718 "trsvcid": "60234" 00:20:48.718 }, 00:20:48.718 "auth": { 00:20:48.718 "state": "completed", 00:20:48.718 "digest": "sha384", 00:20:48.718 "dhgroup": "ffdhe6144" 00:20:48.718 } 00:20:48.718 } 00:20:48.718 ]' 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.718 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.975 16:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.906 16:19:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.906 16:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.163 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.092 00:20:51.092 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.092 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.092 16:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.349 { 00:20:51.349 "cntlid": 89, 00:20:51.349 "qid": 0, 00:20:51.349 "state": "enabled", 00:20:51.349 "listen_address": { 00:20:51.349 "trtype": "TCP", 00:20:51.349 "adrfam": "IPv4", 00:20:51.349 "traddr": "10.0.0.2", 00:20:51.349 
"trsvcid": "4420" 00:20:51.349 }, 00:20:51.349 "peer_address": { 00:20:51.349 "trtype": "TCP", 00:20:51.349 "adrfam": "IPv4", 00:20:51.349 "traddr": "10.0.0.1", 00:20:51.349 "trsvcid": "60266" 00:20:51.349 }, 00:20:51.349 "auth": { 00:20:51.349 "state": "completed", 00:20:51.349 "digest": "sha384", 00:20:51.349 "dhgroup": "ffdhe8192" 00:20:51.349 } 00:20:51.349 } 00:20:51.349 ]' 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.349 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.911 16:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.841 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.098 16:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.030 00:20:54.030 16:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.030 16:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.030 16:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.287 { 00:20:54.287 "cntlid": 91, 00:20:54.287 "qid": 0, 00:20:54.287 "state": "enabled", 00:20:54.287 "listen_address": { 00:20:54.287 "trtype": "TCP", 00:20:54.287 "adrfam": "IPv4", 00:20:54.287 "traddr": "10.0.0.2", 00:20:54.287 "trsvcid": "4420" 00:20:54.287 }, 00:20:54.287 "peer_address": { 00:20:54.287 "trtype": "TCP", 00:20:54.287 "adrfam": "IPv4", 00:20:54.287 "traddr": "10.0.0.1", 00:20:54.287 "trsvcid": "60290" 00:20:54.287 }, 00:20:54.287 "auth": { 00:20:54.287 "state": "completed", 00:20:54.287 "digest": "sha384", 00:20:54.287 "dhgroup": "ffdhe8192" 00:20:54.287 } 00:20:54.287 } 00:20:54.287 ]' 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.287 16:19:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.287 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.544 16:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.475 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.732 16:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.664 00:20:56.664 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.664 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.664 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.921 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.921 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.921 16:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.921 16:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.921 16:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.921 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.921 { 00:20:56.921 "cntlid": 93, 00:20:56.921 "qid": 0, 00:20:56.921 "state": "enabled", 00:20:56.921 "listen_address": { 00:20:56.921 "trtype": "TCP", 00:20:56.921 "adrfam": "IPv4", 00:20:56.921 "traddr": "10.0.0.2", 00:20:56.921 "trsvcid": "4420" 00:20:56.921 }, 00:20:56.921 "peer_address": { 00:20:56.921 "trtype": "TCP", 00:20:56.921 "adrfam": "IPv4", 00:20:56.921 "traddr": "10.0.0.1", 00:20:56.921 "trsvcid": "60324" 00:20:56.921 }, 00:20:56.921 "auth": { 00:20:56.921 "state": "completed", 00:20:56.921 "digest": "sha384", 00:20:56.921 "dhgroup": "ffdhe8192" 00:20:56.921 } 00:20:56.921 } 00:20:56.921 ]' 00:20:56.921 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.178 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.178 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.178 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.178 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.178 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.178 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.178 16:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.435 16:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.367 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.625 16:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.560 00:20:59.560 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.560 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.560 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.817 16:19:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.817 { 00:20:59.817 "cntlid": 95, 00:20:59.817 "qid": 0, 00:20:59.817 "state": "enabled", 00:20:59.817 "listen_address": { 00:20:59.817 "trtype": "TCP", 00:20:59.817 "adrfam": "IPv4", 00:20:59.817 "traddr": "10.0.0.2", 00:20:59.817 "trsvcid": "4420" 00:20:59.817 }, 00:20:59.817 "peer_address": { 00:20:59.817 "trtype": "TCP", 00:20:59.817 "adrfam": "IPv4", 00:20:59.817 "traddr": "10.0.0.1", 00:20:59.817 "trsvcid": "33644" 00:20:59.817 }, 00:20:59.817 "auth": { 00:20:59.817 "state": "completed", 00:20:59.817 "digest": "sha384", 00:20:59.817 "dhgroup": "ffdhe8192" 00:20:59.817 } 00:20:59.817 } 00:20:59.817 ]' 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.817 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.075 16:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.007 16:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.265 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.522 00:21:01.779 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.779 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.779 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.779 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.780 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.780 16:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.780 16:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.037 { 00:21:02.037 "cntlid": 97, 00:21:02.037 "qid": 0, 00:21:02.037 "state": "enabled", 00:21:02.037 "listen_address": { 00:21:02.037 "trtype": "TCP", 00:21:02.037 "adrfam": "IPv4", 00:21:02.037 "traddr": "10.0.0.2", 00:21:02.037 "trsvcid": "4420" 00:21:02.037 }, 00:21:02.037 "peer_address": { 00:21:02.037 "trtype": "TCP", 00:21:02.037 "adrfam": "IPv4", 00:21:02.037 "traddr": "10.0.0.1", 00:21:02.037 "trsvcid": "33664" 00:21:02.037 }, 00:21:02.037 "auth": { 00:21:02.037 "state": "completed", 00:21:02.037 "digest": "sha512", 00:21:02.037 "dhgroup": "null" 00:21:02.037 } 00:21:02.037 } 00:21:02.037 ]' 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.037 16:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.295 16:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.229 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.486 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.744 00:21:03.744 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.744 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.744 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.002 { 00:21:04.002 "cntlid": 99, 00:21:04.002 "qid": 0, 00:21:04.002 "state": "enabled", 00:21:04.002 "listen_address": { 00:21:04.002 "trtype": "TCP", 00:21:04.002 "adrfam": "IPv4", 00:21:04.002 "traddr": "10.0.0.2", 00:21:04.002 "trsvcid": "4420" 00:21:04.002 }, 00:21:04.002 "peer_address": { 00:21:04.002 "trtype": "TCP", 00:21:04.002 "adrfam": "IPv4", 00:21:04.002 "traddr": "10.0.0.1", 00:21:04.002 "trsvcid": "33684" 00:21:04.002 }, 00:21:04.002 "auth": { 00:21:04.002 "state": "completed", 00:21:04.002 "digest": "sha512", 00:21:04.002 "dhgroup": "null" 00:21:04.002 } 00:21:04.002 } 00:21:04.002 ]' 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:04.002 16:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.260 16:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.260 16:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.260 16:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.518 16:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 
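The nvme connect line above is the host-side half of each cycle: once the target has registered the host's key pair, nvme-cli must complete the same DH-HMAC-CHAP handshake through the kernel initiator. A minimal sketch of that invocation, with placeholder variables standing in for the host NQN, host ID, and secrets (the DHHC-1 blobs in this run are generated per test and are not reproduced here):

    # Sketch: bidirectional DH-CHAP connect over NVMe/TCP (placeholder values,
    # not the ones from this run). --dhchap-secret authenticates the host to
    # the controller; --dhchap-ctrl-secret makes the controller prove itself
    # back to the host.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "$HOST_KEY" \
        --dhchap-ctrl-secret "$CTRL_KEY"
    # Tear the session back down once the handshake has been exercised.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Both secrets must match what nvmf_subsystem_add_host registered on the target with --dhchap-key keyN and --dhchap-ctrlr-key ckeyN, which is why the suite removes and re-adds the host before every digest/dhgroup/key combination.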
00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.450 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.706 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.963 00:21:05.963 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.963 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.963 16:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.221 { 00:21:06.221 "cntlid": 101, 00:21:06.221 "qid": 0, 00:21:06.221 "state": "enabled", 00:21:06.221 "listen_address": { 00:21:06.221 "trtype": "TCP", 00:21:06.221 "adrfam": "IPv4", 00:21:06.221 "traddr": "10.0.0.2", 00:21:06.221 "trsvcid": "4420" 00:21:06.221 }, 00:21:06.221 "peer_address": { 00:21:06.221 "trtype": "TCP", 00:21:06.221 "adrfam": "IPv4", 00:21:06.221 "traddr": "10.0.0.1", 00:21:06.221 "trsvcid": "33708" 00:21:06.221 }, 00:21:06.221 "auth": { 00:21:06.221 "state": "completed", 00:21:06.221 "digest": "sha512", 00:21:06.221 "dhgroup": "null" 00:21:06.221 } 00:21:06.221 } 00:21:06.221 ]' 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.221 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.478 16:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.409 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.665 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:07.665 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.665 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.665 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:07.665 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:07.666 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.666 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:07.666 16:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.666 16:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.666 16:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.666 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.666 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.923 00:21:08.180 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.180 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.180 16:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.180 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.180 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.180 16:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.180 16:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.437 { 00:21:08.437 "cntlid": 103, 00:21:08.437 "qid": 0, 00:21:08.437 "state": "enabled", 00:21:08.437 "listen_address": { 00:21:08.437 "trtype": "TCP", 00:21:08.437 "adrfam": "IPv4", 00:21:08.437 "traddr": "10.0.0.2", 00:21:08.437 "trsvcid": "4420" 00:21:08.437 }, 00:21:08.437 "peer_address": { 00:21:08.437 "trtype": "TCP", 00:21:08.437 "adrfam": "IPv4", 00:21:08.437 "traddr": "10.0.0.1", 00:21:08.437 "trsvcid": "42210" 00:21:08.437 }, 00:21:08.437 "auth": { 00:21:08.437 "state": "completed", 00:21:08.437 "digest": "sha512", 00:21:08.437 "dhgroup": "null" 00:21:08.437 } 00:21:08.437 } 00:21:08.437 ]' 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.437 16:19:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.437 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.693 16:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.620 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.878 16:19:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.878 16:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.441 00:21:10.441 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.441 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.441 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.698 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.698 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.698 16:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.698 16:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.698 16:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.698 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.698 { 00:21:10.698 "cntlid": 105, 00:21:10.698 "qid": 0, 00:21:10.698 "state": "enabled", 00:21:10.698 "listen_address": { 00:21:10.698 "trtype": "TCP", 00:21:10.698 "adrfam": "IPv4", 00:21:10.698 "traddr": "10.0.0.2", 00:21:10.698 "trsvcid": "4420" 00:21:10.698 }, 00:21:10.698 "peer_address": { 00:21:10.698 "trtype": "TCP", 00:21:10.698 "adrfam": "IPv4", 00:21:10.698 "traddr": "10.0.0.1", 00:21:10.698 "trsvcid": "42230" 00:21:10.698 }, 00:21:10.698 "auth": { 00:21:10.698 "state": "completed", 00:21:10.698 "digest": "sha512", 00:21:10.698 "dhgroup": "ffdhe2048" 00:21:10.699 } 00:21:10.699 } 00:21:10.699 ]' 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.699 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.956 16:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.890 16:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.455 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.714 00:21:12.714 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.714 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:12.714 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.972 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.972 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.972 16:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.972 16:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.972 16:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.972 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.972 { 00:21:12.972 "cntlid": 107, 00:21:12.972 "qid": 0, 00:21:12.972 "state": "enabled", 00:21:12.972 "listen_address": { 00:21:12.972 "trtype": "TCP", 00:21:12.972 "adrfam": "IPv4", 00:21:12.972 "traddr": "10.0.0.2", 00:21:12.972 "trsvcid": "4420" 00:21:12.972 }, 00:21:12.972 "peer_address": { 00:21:12.972 "trtype": "TCP", 00:21:12.972 "adrfam": "IPv4", 00:21:12.972 "traddr": "10.0.0.1", 00:21:12.972 "trsvcid": "42258" 00:21:12.972 }, 00:21:12.972 "auth": { 00:21:12.972 "state": "completed", 00:21:12.972 "digest": "sha512", 00:21:12.973 "dhgroup": "ffdhe2048" 00:21:12.973 } 00:21:12.973 } 00:21:12.973 ]' 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.973 16:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.231 16:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:21:14.165 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.165 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:14.165 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.165 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.165 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.165 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.165 16:19:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.165 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.422 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:14.422 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.422 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.423 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.689 00:21:14.998 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.998 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.998 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.999 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.999 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.999 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.999 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.999 16:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.999 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.999 { 00:21:14.999 "cntlid": 109, 00:21:14.999 "qid": 0, 00:21:14.999 "state": "enabled", 00:21:14.999 "listen_address": { 00:21:14.999 "trtype": "TCP", 00:21:14.999 "adrfam": "IPv4", 00:21:14.999 "traddr": "10.0.0.2", 00:21:14.999 "trsvcid": "4420" 00:21:14.999 }, 00:21:14.999 "peer_address": { 00:21:14.999 "trtype": "TCP", 00:21:14.999 
"adrfam": "IPv4", 00:21:14.999 "traddr": "10.0.0.1", 00:21:14.999 "trsvcid": "42292" 00:21:14.999 }, 00:21:14.999 "auth": { 00:21:14.999 "state": "completed", 00:21:14.999 "digest": "sha512", 00:21:14.999 "dhgroup": "ffdhe2048" 00:21:14.999 } 00:21:14.999 } 00:21:14.999 ]' 00:21:14.999 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.280 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.280 16:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.280 16:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.280 16:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.280 16:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.280 16:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.280 16:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.539 16:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:16.474 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:16.732 16:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:17.298
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.298 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:17.298 {
00:21:17.298 "cntlid": 111,
00:21:17.298 "qid": 0,
00:21:17.298 "state": "enabled",
00:21:17.298 "listen_address": {
00:21:17.298 "trtype": "TCP",
00:21:17.298 "adrfam": "IPv4",
00:21:17.298 "traddr": "10.0.0.2",
00:21:17.298 "trsvcid": "4420"
00:21:17.298 },
00:21:17.298 "peer_address": {
00:21:17.298 "trtype": "TCP",
00:21:17.298 "adrfam": "IPv4",
00:21:17.298 "traddr": "10.0.0.1",
00:21:17.298 "trsvcid": "42318"
00:21:17.298 },
00:21:17.298 "auth": {
00:21:17.298 "state": "completed",
00:21:17.298 "digest": "sha512",
00:21:17.298 "dhgroup": "ffdhe2048"
00:21:17.298 }
00:21:17.298 }
00:21:17.298 ]'
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:17.556 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.814 16:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:18.752 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.010 16:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
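Note the asymmetry between the two registrations visible above: key3 was added with --dhchap-key only, so the target verifies the host but never proves its own identity (its nvme connect accordingly carries no --dhchap-ctrl-secret), while the ffdhe3072 round that follows registers key0 together with --dhchap-ctrlr-key ckey0 for bidirectional authentication. Schematically, with rpc.py abbreviating the target-side RPC client and $HOSTNQN standing in for the host NQN used in this run:

# Unidirectional: only the host authenticates (the key3 case above).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
# Bidirectional: the controller must also answer with ckey0 (the key0 case).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

Here key0 and ckey0 are the names of keys registered with the target earlier in the test, not the secrets themselves.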
00:21:19.574 00:21:19.574 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.574 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.574 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.830 { 00:21:19.830 "cntlid": 113, 00:21:19.830 "qid": 0, 00:21:19.830 "state": "enabled", 00:21:19.830 "listen_address": { 00:21:19.830 "trtype": "TCP", 00:21:19.830 "adrfam": "IPv4", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "trsvcid": "4420" 00:21:19.830 }, 00:21:19.830 "peer_address": { 00:21:19.830 "trtype": "TCP", 00:21:19.830 "adrfam": "IPv4", 00:21:19.830 "traddr": "10.0.0.1", 00:21:19.830 "trsvcid": "42954" 00:21:19.830 }, 00:21:19.830 "auth": { 00:21:19.830 "state": "completed", 00:21:19.830 "digest": "sha512", 00:21:19.830 "dhgroup": "ffdhe3072" 00:21:19.830 } 00:21:19.830 } 00:21:19.830 ]' 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.830 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.087 16:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.024 16:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.281 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.847 00:21:21.847 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.847 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.847 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.105 { 00:21:22.105 
"cntlid": 115, 00:21:22.105 "qid": 0, 00:21:22.105 "state": "enabled", 00:21:22.105 "listen_address": { 00:21:22.105 "trtype": "TCP", 00:21:22.105 "adrfam": "IPv4", 00:21:22.105 "traddr": "10.0.0.2", 00:21:22.105 "trsvcid": "4420" 00:21:22.105 }, 00:21:22.105 "peer_address": { 00:21:22.105 "trtype": "TCP", 00:21:22.105 "adrfam": "IPv4", 00:21:22.105 "traddr": "10.0.0.1", 00:21:22.105 "trsvcid": "42972" 00:21:22.105 }, 00:21:22.105 "auth": { 00:21:22.105 "state": "completed", 00:21:22.105 "digest": "sha512", 00:21:22.105 "dhgroup": "ffdhe3072" 00:21:22.105 } 00:21:22.105 } 00:21:22.105 ]' 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.105 16:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.364 16:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:21:23.298 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.556 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:23.556 16:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.556 16:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.556 16:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.556 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.556 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.556 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.814 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.071 00:21:24.071 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.071 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.071 16:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.328 { 00:21:24.328 "cntlid": 117, 00:21:24.328 "qid": 0, 00:21:24.328 "state": "enabled", 00:21:24.328 "listen_address": { 00:21:24.328 "trtype": "TCP", 00:21:24.328 "adrfam": "IPv4", 00:21:24.328 "traddr": "10.0.0.2", 00:21:24.328 "trsvcid": "4420" 00:21:24.328 }, 00:21:24.328 "peer_address": { 00:21:24.328 "trtype": "TCP", 00:21:24.328 "adrfam": "IPv4", 00:21:24.328 "traddr": "10.0.0.1", 00:21:24.328 "trsvcid": "42996" 00:21:24.328 }, 00:21:24.328 "auth": { 00:21:24.328 "state": "completed", 00:21:24.328 "digest": "sha512", 00:21:24.328 "dhgroup": "ffdhe3072" 00:21:24.328 } 00:21:24.328 } 00:21:24.328 ]' 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.328 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:24.587 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.587 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.587 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.845 16:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.801 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.060 16:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.318 00:21:26.318 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.318 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.318 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.575 { 00:21:26.575 "cntlid": 119, 00:21:26.575 "qid": 0, 00:21:26.575 "state": "enabled", 00:21:26.575 "listen_address": { 00:21:26.575 "trtype": "TCP", 00:21:26.575 "adrfam": "IPv4", 00:21:26.575 "traddr": "10.0.0.2", 00:21:26.575 "trsvcid": "4420" 00:21:26.575 }, 00:21:26.575 "peer_address": { 00:21:26.575 "trtype": "TCP", 00:21:26.575 "adrfam": "IPv4", 00:21:26.575 "traddr": "10.0.0.1", 00:21:26.575 "trsvcid": "43034" 00:21:26.575 }, 00:21:26.575 "auth": { 00:21:26.575 "state": "completed", 00:21:26.575 "digest": "sha512", 00:21:26.575 "dhgroup": "ffdhe3072" 00:21:26.575 } 00:21:26.575 } 00:21:26.575 ]' 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:26.575 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.833 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.833 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.833 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.090 16:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.076 16:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.334 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.591 00:21:28.591 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.591 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.591 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.848 16:20:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.848 { 00:21:28.848 "cntlid": 121, 00:21:28.848 "qid": 0, 00:21:28.848 "state": "enabled", 00:21:28.848 "listen_address": { 00:21:28.848 "trtype": "TCP", 00:21:28.848 "adrfam": "IPv4", 00:21:28.848 "traddr": "10.0.0.2", 00:21:28.848 "trsvcid": "4420" 00:21:28.848 }, 00:21:28.848 "peer_address": { 00:21:28.848 "trtype": "TCP", 00:21:28.848 "adrfam": "IPv4", 00:21:28.848 "traddr": "10.0.0.1", 00:21:28.848 "trsvcid": "38364" 00:21:28.848 }, 00:21:28.848 "auth": { 00:21:28.848 "state": "completed", 00:21:28.848 "digest": "sha512", 00:21:28.848 "dhgroup": "ffdhe4096" 00:21:28.848 } 00:21:28.848 } 00:21:28.848 ]' 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.848 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.106 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.106 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.106 16:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.106 16:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:21:30.040 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.300 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:30.300 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.300 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.300 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.300 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.300 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.300 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.558 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.815 00:21:30.815 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.815 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.815 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.071 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.071 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.071 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.071 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.071 16:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.071 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.071 { 00:21:31.071 "cntlid": 123, 00:21:31.071 "qid": 0, 00:21:31.071 "state": "enabled", 00:21:31.071 "listen_address": { 00:21:31.071 "trtype": "TCP", 00:21:31.071 "adrfam": "IPv4", 00:21:31.071 "traddr": "10.0.0.2", 00:21:31.071 "trsvcid": "4420" 00:21:31.071 }, 00:21:31.071 "peer_address": { 00:21:31.071 "trtype": "TCP", 00:21:31.071 "adrfam": "IPv4", 00:21:31.071 "traddr": "10.0.0.1", 00:21:31.071 "trsvcid": "38380" 00:21:31.071 }, 00:21:31.071 "auth": { 00:21:31.071 "state": "completed", 00:21:31.071 "digest": "sha512", 00:21:31.071 "dhgroup": "ffdhe4096" 00:21:31.071 } 00:21:31.071 } 00:21:31.071 ]' 00:21:31.071 16:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.071 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.071 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.328 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.328 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.328 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.328 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.328 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.586 16:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:21:32.516 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.517 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:32.517 16:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.517 16:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.517 16:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.517 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.517 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.517 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.774 
16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.774 16:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.341 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.341 { 00:21:33.341 "cntlid": 125, 00:21:33.341 "qid": 0, 00:21:33.341 "state": "enabled", 00:21:33.341 "listen_address": { 00:21:33.341 "trtype": "TCP", 00:21:33.341 "adrfam": "IPv4", 00:21:33.341 "traddr": "10.0.0.2", 00:21:33.341 "trsvcid": "4420" 00:21:33.341 }, 00:21:33.341 "peer_address": { 00:21:33.341 "trtype": "TCP", 00:21:33.341 "adrfam": "IPv4", 00:21:33.341 "traddr": "10.0.0.1", 00:21:33.341 "trsvcid": "38412" 00:21:33.341 }, 00:21:33.341 "auth": { 00:21:33.341 "state": "completed", 00:21:33.341 "digest": "sha512", 00:21:33.341 "dhgroup": "ffdhe4096" 00:21:33.341 } 00:21:33.341 } 00:21:33.341 ]' 00:21:33.341 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.598 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.598 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.598 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.598 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.598 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.598 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.598 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.855 16:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.801 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.099 16:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.356 00:21:35.356 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.356 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.356 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.614 { 00:21:35.614 "cntlid": 127, 00:21:35.614 "qid": 0, 00:21:35.614 "state": "enabled", 00:21:35.614 "listen_address": { 00:21:35.614 "trtype": "TCP", 00:21:35.614 "adrfam": "IPv4", 00:21:35.614 "traddr": "10.0.0.2", 00:21:35.614 "trsvcid": "4420" 00:21:35.614 }, 00:21:35.614 "peer_address": { 00:21:35.614 "trtype": "TCP", 00:21:35.614 "adrfam": "IPv4", 00:21:35.614 "traddr": "10.0.0.1", 00:21:35.614 "trsvcid": "38450" 00:21:35.614 }, 00:21:35.614 "auth": { 00:21:35.614 "state": "completed", 00:21:35.614 "digest": "sha512", 00:21:35.614 "dhgroup": "ffdhe4096" 00:21:35.614 } 00:21:35.614 } 00:21:35.614 ]' 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.614 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.872 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.872 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.872 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.130 16:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
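As throughout this log, every hostrpc call is immediately followed by its xtrace expansion (the target/auth.sh@31 lines): the test drives two SPDK processes at once, the target through the default RPC socket via rpc_cmd, and a separate host application through /var/tmp/host.sock. Judging from those expansions, the wrapper amounts to something like the following sketch (an assumption; the real definition lives around line 31 of target/auth.sh, and $rootdir is a stand-in for the SPDK checkout):

# Forward an RPC to the host-side SPDK app instead of the target.
hostrpc() {
  "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}
# e.g. the call that opens this ffdhe6144 round:
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

The next line in the log is exactly that expansion.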
00:21:37.063 16:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.321 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.887 00:21:37.888 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.888 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.888 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.145 { 00:21:38.145 "cntlid": 129, 00:21:38.145 "qid": 0, 00:21:38.145 "state": "enabled", 00:21:38.145 "listen_address": { 00:21:38.145 "trtype": "TCP", 00:21:38.145 "adrfam": "IPv4", 00:21:38.145 "traddr": "10.0.0.2", 00:21:38.145 "trsvcid": "4420" 00:21:38.145 }, 00:21:38.145 "peer_address": { 00:21:38.145 "trtype": "TCP", 00:21:38.145 "adrfam": "IPv4", 00:21:38.145 "traddr": "10.0.0.1", 00:21:38.145 "trsvcid": "38480" 00:21:38.145 }, 00:21:38.145 "auth": { 
00:21:38.145 "state": "completed", 00:21:38.145 "digest": "sha512", 00:21:38.145 "dhgroup": "ffdhe6144" 00:21:38.145 } 00:21:38.145 } 00:21:38.145 ]' 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.145 16:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.145 16:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.145 16:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.145 16:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.403 16:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.337 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.595 16:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.161 00:21:40.161 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.161 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.161 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.432 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.432 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.433 { 00:21:40.433 "cntlid": 131, 00:21:40.433 "qid": 0, 00:21:40.433 "state": "enabled", 00:21:40.433 "listen_address": { 00:21:40.433 "trtype": "TCP", 00:21:40.433 "adrfam": "IPv4", 00:21:40.433 "traddr": "10.0.0.2", 00:21:40.433 "trsvcid": "4420" 00:21:40.433 }, 00:21:40.433 "peer_address": { 00:21:40.433 "trtype": "TCP", 00:21:40.433 "adrfam": "IPv4", 00:21:40.433 "traddr": "10.0.0.1", 00:21:40.433 "trsvcid": "35284" 00:21:40.433 }, 00:21:40.433 "auth": { 00:21:40.433 "state": "completed", 00:21:40.433 "digest": "sha512", 00:21:40.433 "dhgroup": "ffdhe6144" 00:21:40.433 } 00:21:40.433 } 00:21:40.433 ]' 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:40.433 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.693 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.693 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.693 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.950 16:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.884 16:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.141 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:42.706 00:21:42.706 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.706 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.706 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.963 { 00:21:42.963 "cntlid": 133, 00:21:42.963 "qid": 0, 00:21:42.963 "state": "enabled", 00:21:42.963 "listen_address": { 00:21:42.963 "trtype": "TCP", 00:21:42.963 "adrfam": "IPv4", 00:21:42.963 "traddr": "10.0.0.2", 00:21:42.963 "trsvcid": "4420" 00:21:42.963 }, 00:21:42.963 "peer_address": { 00:21:42.963 "trtype": "TCP", 00:21:42.963 "adrfam": "IPv4", 00:21:42.963 "traddr": "10.0.0.1", 00:21:42.963 "trsvcid": "35314" 00:21:42.963 }, 00:21:42.963 "auth": { 00:21:42.963 "state": "completed", 00:21:42.963 "digest": "sha512", 00:21:42.963 "dhgroup": "ffdhe6144" 00:21:42.963 } 00:21:42.963 } 00:21:42.963 ]' 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.963 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.220 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.220 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.220 16:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.477 16:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:21:44.421 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.421 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:44.421 16:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.421 16:20:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.421 16:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.421 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.421 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.421 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.678 16:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.245 00:21:45.245 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.245 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.245 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.503 { 00:21:45.503 "cntlid": 135, 00:21:45.503 "qid": 0, 00:21:45.503 "state": "enabled", 00:21:45.503 "listen_address": { 
00:21:45.503 "trtype": "TCP", 00:21:45.503 "adrfam": "IPv4", 00:21:45.503 "traddr": "10.0.0.2", 00:21:45.503 "trsvcid": "4420" 00:21:45.503 }, 00:21:45.503 "peer_address": { 00:21:45.503 "trtype": "TCP", 00:21:45.503 "adrfam": "IPv4", 00:21:45.503 "traddr": "10.0.0.1", 00:21:45.503 "trsvcid": "35336" 00:21:45.503 }, 00:21:45.503 "auth": { 00:21:45.503 "state": "completed", 00:21:45.503 "digest": "sha512", 00:21:45.503 "dhgroup": "ffdhe6144" 00:21:45.503 } 00:21:45.503 } 00:21:45.503 ]' 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.503 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.762 16:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.136 16:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.073 00:21:48.073 16:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.073 16:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.073 16:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.332 { 00:21:48.332 "cntlid": 137, 00:21:48.332 "qid": 0, 00:21:48.332 "state": "enabled", 00:21:48.332 "listen_address": { 00:21:48.332 "trtype": "TCP", 00:21:48.332 "adrfam": "IPv4", 00:21:48.332 "traddr": "10.0.0.2", 00:21:48.332 "trsvcid": "4420" 00:21:48.332 }, 00:21:48.332 "peer_address": { 00:21:48.332 "trtype": "TCP", 00:21:48.332 "adrfam": "IPv4", 00:21:48.332 "traddr": "10.0.0.1", 00:21:48.332 "trsvcid": "35376" 00:21:48.332 }, 00:21:48.332 "auth": { 00:21:48.332 "state": "completed", 00:21:48.332 "digest": "sha512", 00:21:48.332 "dhgroup": "ffdhe8192" 00:21:48.332 } 00:21:48.332 } 00:21:48.332 ]' 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.332 16:20:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.332 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.591 16:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:21:49.526 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.784 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:49.784 16:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.784 16:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.784 16:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.784 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.784 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.784 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.041 16:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.041 16:20:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.973 00:21:50.973 16:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.973 16:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.973 16:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.230 16:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.230 16:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.230 16:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.230 16:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.230 { 00:21:51.230 "cntlid": 139, 00:21:51.230 "qid": 0, 00:21:51.230 "state": "enabled", 00:21:51.230 "listen_address": { 00:21:51.230 "trtype": "TCP", 00:21:51.230 "adrfam": "IPv4", 00:21:51.230 "traddr": "10.0.0.2", 00:21:51.230 "trsvcid": "4420" 00:21:51.230 }, 00:21:51.230 "peer_address": { 00:21:51.230 "trtype": "TCP", 00:21:51.230 "adrfam": "IPv4", 00:21:51.230 "traddr": "10.0.0.1", 00:21:51.230 "trsvcid": "45456" 00:21:51.230 }, 00:21:51.230 "auth": { 00:21:51.230 "state": "completed", 00:21:51.230 "digest": "sha512", 00:21:51.230 "dhgroup": "ffdhe8192" 00:21:51.230 } 00:21:51.230 } 00:21:51.230 ]' 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.230 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.231 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.231 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.490 16:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2E3NTVhZTRhNzVlODQzMDZmMTQxY2EwOGRjNzhmOWYER/i+: --dhchap-ctrl-secret DHHC-1:02:Zjk1Njk4OTc0ODc3Y2FkMmNjNDg3NjQxMDNiYTkwODYxYWVlMWI3NWZkMWU1NzkwodYIJw==: 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.425 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.685 16:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.944 16:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.944 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.944 16:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.881 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.881 { 00:21:53.881 "cntlid": 141, 00:21:53.881 "qid": 0, 00:21:53.881 "state": "enabled", 00:21:53.881 "listen_address": { 00:21:53.881 "trtype": "TCP", 00:21:53.881 "adrfam": "IPv4", 00:21:53.881 "traddr": "10.0.0.2", 00:21:53.881 "trsvcid": "4420" 00:21:53.881 }, 00:21:53.881 "peer_address": { 00:21:53.881 "trtype": "TCP", 00:21:53.881 "adrfam": "IPv4", 00:21:53.881 "traddr": "10.0.0.1", 00:21:53.881 "trsvcid": "45474" 00:21:53.881 }, 00:21:53.881 "auth": { 00:21:53.881 "state": "completed", 00:21:53.881 "digest": "sha512", 00:21:53.881 "dhgroup": "ffdhe8192" 00:21:53.881 } 00:21:53.881 } 00:21:53.881 ]' 00:21:53.881 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.139 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.140 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.140 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.140 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.140 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.140 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.140 16:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.398 16:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OTE4ZmEzOWMxYmNmZTFlMGQ3NTA5MWVhZmM3MTMyMWNjYjFjZTQ4MDIzOTkyNTM0jvV8OA==: --dhchap-ctrl-secret DHHC-1:01:MDAyNDJlZmMyZTM0MzQ5YWMzYjczODc2ZTdlNDg2ZDlBTwLm: 00:21:55.332 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.333 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:55.333 16:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.333 16:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.333 16:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.333 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.333 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.333 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.590 16:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.525 00:21:56.525 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.525 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.525 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.781 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.781 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.781 16:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.782 16:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.782 16:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.782 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.782 { 00:21:56.782 "cntlid": 143, 00:21:56.782 "qid": 0, 00:21:56.782 "state": "enabled", 00:21:56.782 "listen_address": { 00:21:56.782 "trtype": "TCP", 00:21:56.782 "adrfam": "IPv4", 00:21:56.782 "traddr": "10.0.0.2", 00:21:56.782 "trsvcid": "4420" 00:21:56.782 }, 00:21:56.782 "peer_address": { 00:21:56.782 "trtype": "TCP", 00:21:56.782 "adrfam": "IPv4", 00:21:56.782 "traddr": "10.0.0.1", 00:21:56.782 "trsvcid": "45494" 00:21:56.782 }, 00:21:56.782 "auth": { 00:21:56.782 "state": "completed", 00:21:56.782 "digest": "sha512", 00:21:56.782 "dhgroup": "ffdhe8192" 00:21:56.782 } 00:21:56.782 } 00:21:56.782 ]' 00:21:56.782 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.782 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.782 16:20:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.782 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.782 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.038 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.038 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.038 16:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.295 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.228 16:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
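For reference, each connect_authenticate pass traced above reduces to the shell sequence below. This is a condensed sketch: the host-side calls use -s /var/tmp/host.sock exactly as in the log, but showing the target-side rpc_cmd calls as plain rpc.py against the default socket is an assumption — the harness routes them through its own rpc_cmd wrapper.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the host to one digest/dhgroup pair for this pass.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# 2. Allow the host on the subsystem with the key pair under test
#    (the key3 passes omit --dhchap-ctrlr-key, since no ckey3 is configured).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach from the host app; DH-HMAC-CHAP runs during CONNECT, and supplying
#    a controller key makes the authentication bidirectional.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. Verify the negotiated parameters from the target's qpair dump.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect "sha512"
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect "ffdhe8192"
# 5. Tear down the app-side controller before exercising the kernel initiator
#    with the same secrets in DHHC-1 form, then remove the host for the next pass.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The negative tests that follow run the same attach with a key the subsystem was not configured for (or a mismatched controller key); those are expected to fail, which is why the hostrpc call is wrapped in NOT and the JSON-RPC response carries code -5, Input/output error.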
00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.485 16:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.486 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.486 16:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.447 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.447 { 00:21:59.447 "cntlid": 145, 00:21:59.447 "qid": 0, 00:21:59.447 "state": "enabled", 00:21:59.447 "listen_address": { 00:21:59.447 "trtype": "TCP", 00:21:59.447 "adrfam": "IPv4", 00:21:59.447 "traddr": "10.0.0.2", 00:21:59.447 "trsvcid": "4420" 00:21:59.447 }, 00:21:59.447 "peer_address": { 00:21:59.447 "trtype": "TCP", 00:21:59.447 "adrfam": "IPv4", 00:21:59.447 "traddr": "10.0.0.1", 00:21:59.447 "trsvcid": "55678" 00:21:59.447 }, 00:21:59.447 "auth": { 00:21:59.447 "state": "completed", 00:21:59.447 "digest": "sha512", 00:21:59.447 "dhgroup": "ffdhe8192" 00:21:59.447 } 00:21:59.447 } 00:21:59.447 ]' 00:21:59.447 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.716 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.716 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.716 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.716 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.716 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.716 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.716 16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.973 
16:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Yjk1ZThjMGY2ZDY3MmJkNGJjMWU1ODE1OGYzZGRiNDU4Zjk3MGYyYzI1MzEzNWVka+/a/Q==: --dhchap-ctrl-secret DHHC-1:03:MGQ3YzFkZjM2NWEzOTk1Y2U5ODhhZWM4MjFhM2JmNjI1NWFhNTY5MzMwODQ3YjM2ZGE3MzI2MmQ1YWUwZDQ5NO30sc4=: 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:00.907 16:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:01.841 request: 00:22:01.841 { 00:22:01.841 "name": "nvme0", 00:22:01.841 "trtype": "tcp", 00:22:01.841 "traddr": 
"10.0.0.2", 00:22:01.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:01.841 "adrfam": "ipv4", 00:22:01.841 "trsvcid": "4420", 00:22:01.841 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.841 "dhchap_key": "key2", 00:22:01.841 "method": "bdev_nvme_attach_controller", 00:22:01.841 "req_id": 1 00:22:01.841 } 00:22:01.841 Got JSON-RPC error response 00:22:01.841 response: 00:22:01.841 { 00:22:01.841 "code": -5, 00:22:01.841 "message": "Input/output error" 00:22:01.841 } 00:22:01.841 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.842 16:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.778 request: 00:22:02.778 { 00:22:02.778 "name": "nvme0", 00:22:02.778 "trtype": "tcp", 00:22:02.779 "traddr": "10.0.0.2", 00:22:02.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:02.779 "adrfam": "ipv4", 00:22:02.779 "trsvcid": "4420", 00:22:02.779 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.779 "dhchap_key": "key1", 00:22:02.779 "dhchap_ctrlr_key": "ckey2", 00:22:02.779 "method": "bdev_nvme_attach_controller", 00:22:02.779 "req_id": 1 00:22:02.779 } 00:22:02.779 Got JSON-RPC error response 00:22:02.779 response: 00:22:02.779 { 00:22:02.779 "code": -5, 00:22:02.779 "message": "Input/output error" 00:22:02.779 } 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.779 16:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.347 request: 00:22:03.347 { 00:22:03.347 "name": "nvme0", 00:22:03.347 "trtype": "tcp", 00:22:03.347 "traddr": "10.0.0.2", 00:22:03.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:03.347 "adrfam": "ipv4", 00:22:03.347 "trsvcid": "4420", 00:22:03.347 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.347 "dhchap_key": "key1", 00:22:03.347 "dhchap_ctrlr_key": "ckey1", 00:22:03.347 "method": "bdev_nvme_attach_controller", 00:22:03.347 "req_id": 1 00:22:03.347 } 00:22:03.347 Got JSON-RPC error response 00:22:03.347 response: 00:22:03.347 { 00:22:03.347 "code": -5, 00:22:03.347 "message": "Input/output error" 00:22:03.347 } 00:22:03.347 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:03.347 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:03.347 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:03.347 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 328829 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 328829 ']' 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 328829 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:03.606 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 328829 00:22:03.607 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:03.607 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:03.607 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 328829' 00:22:03.607 killing process with pid 328829 00:22:03.607 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 328829 00:22:03.607 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 328829 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:03.865 16:20:46 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=351323 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 351323 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 351323 ']' 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:03.865 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 351323 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 351323 ']' 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
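The nvmfappstart/waitforlisten pair traced above is the standard SPDK bring-up: launch nvmf_tgt with --wait-for-rpc, record its pid, and poll the UNIX-domain RPC socket until the application answers. A minimal sketch of that loop is below; the retry bookkeeping is simplified, and the real run additionally wraps the binary in 'ip netns exec cvl_0_0_ns_spdk' as the trace shows.

  # Sketch only: start the target and block until /var/tmp/spdk.sock responds.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 100); do
      # rpc_get_methods succeeds as soon as the app is listening on the socket
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done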
00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.122 16:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:04.380 16:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.312 00:22:05.312 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.312 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.312 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.569 { 00:22:05.569 
"cntlid": 1, 00:22:05.569 "qid": 0, 00:22:05.569 "state": "enabled", 00:22:05.569 "listen_address": { 00:22:05.569 "trtype": "TCP", 00:22:05.569 "adrfam": "IPv4", 00:22:05.569 "traddr": "10.0.0.2", 00:22:05.569 "trsvcid": "4420" 00:22:05.569 }, 00:22:05.569 "peer_address": { 00:22:05.569 "trtype": "TCP", 00:22:05.569 "adrfam": "IPv4", 00:22:05.569 "traddr": "10.0.0.1", 00:22:05.569 "trsvcid": "55736" 00:22:05.569 }, 00:22:05.569 "auth": { 00:22:05.569 "state": "completed", 00:22:05.569 "digest": "sha512", 00:22:05.569 "dhgroup": "ffdhe8192" 00:22:05.569 } 00:22:05.569 } 00:22:05.569 ]' 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.569 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.828 16:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:M2FkMTM5YjlmYjg5NTgwOGNhZjBlYWY4YjExZjhhM2U5ZWUyM2QzMDAzOGEwMzU5MGU3ZmY2N2E3ZGFlZTQ4OL/gpsQ=: 00:22:06.761 16:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.761 16:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.761 16:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.761 16:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.019 16:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.019 16:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:07.019 16:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.019 16:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.019 16:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.019 16:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:07.019 16:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.276 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.534 request: 00:22:07.534 { 00:22:07.534 "name": "nvme0", 00:22:07.534 "trtype": "tcp", 00:22:07.534 "traddr": "10.0.0.2", 00:22:07.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:07.534 "adrfam": "ipv4", 00:22:07.534 "trsvcid": "4420", 00:22:07.534 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:07.534 "dhchap_key": "key3", 00:22:07.534 "method": "bdev_nvme_attach_controller", 00:22:07.534 "req_id": 1 00:22:07.534 } 00:22:07.534 Got JSON-RPC error response 00:22:07.534 response: 00:22:07.534 { 00:22:07.534 "code": -5, 00:22:07.534 "message": "Input/output error" 00:22:07.534 } 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:07.534 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.791 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.049 request: 00:22:08.049 { 00:22:08.049 "name": "nvme0", 00:22:08.049 "trtype": "tcp", 00:22:08.049 "traddr": "10.0.0.2", 00:22:08.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:08.049 "adrfam": "ipv4", 00:22:08.049 "trsvcid": "4420", 00:22:08.049 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.049 "dhchap_key": "key3", 00:22:08.049 "method": "bdev_nvme_attach_controller", 00:22:08.049 "req_id": 1 00:22:08.049 } 00:22:08.049 Got JSON-RPC error response 00:22:08.049 response: 00:22:08.049 { 00:22:08.049 "code": -5, 00:22:08.049 "message": "Input/output error" 00:22:08.049 } 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.049 16:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.306 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.564 request: 00:22:08.564 { 00:22:08.564 "name": "nvme0", 00:22:08.564 "trtype": "tcp", 00:22:08.564 "traddr": "10.0.0.2", 00:22:08.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:08.564 "adrfam": "ipv4", 00:22:08.564 "trsvcid": "4420", 00:22:08.564 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.564 "dhchap_key": "key0", 00:22:08.564 "dhchap_ctrlr_key": "key1", 00:22:08.564 "method": "bdev_nvme_attach_controller", 00:22:08.564 "req_id": 1 00:22:08.564 } 00:22:08.564 Got JSON-RPC error response 00:22:08.564 response: 00:22:08.564 { 00:22:08.564 "code": -5, 00:22:08.564 "message": "Input/output error" 00:22:08.564 } 00:22:08.564 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:08.564 16:20:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.564 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.564 16:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.564 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:08.564 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:08.823 00:22:09.081 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:09.081 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:09.081 16:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.081 16:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.081 16:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.081 16:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.339 16:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:09.339 16:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:09.339 16:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 328856 00:22:09.339 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 328856 ']' 00:22:09.339 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 328856 00:22:09.339 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:09.596 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:09.596 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 328856 00:22:09.596 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:09.596 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:09.596 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 328856' 00:22:09.596 killing process with pid 328856 00:22:09.596 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 328856 00:22:09.596 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 328856 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:09.854 
16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.854 rmmod nvme_tcp 00:22:09.854 rmmod nvme_fabrics 00:22:09.854 rmmod nvme_keyring 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 351323 ']' 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 351323 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 351323 ']' 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 351323 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 351323 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 351323' 00:22:09.854 killing process with pid 351323 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 351323 00:22:09.854 16:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 351323 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.113 16:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.647 16:20:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:12.648 16:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oBl /tmp/spdk.key-sha256.QeC /tmp/spdk.key-sha384.TBI /tmp/spdk.key-sha512.rVv /tmp/spdk.key-sha512.aVf /tmp/spdk.key-sha384.yVg /tmp/spdk.key-sha256.BRj '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:12.648 00:22:12.648 real 3m8.917s 00:22:12.648 user 7m20.116s 00:22:12.648 sys 0m24.890s 00:22:12.648 16:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:12.648 16:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.648 ************************************ 00:22:12.648 END TEST nvmf_auth_target 
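Every positive and negative case in the auth suite that just concluded reduces to the same host-side RPC, varying only which DH-HCHAP key arguments are supplied; a representative invocation, using the exact parameters exercised above, is:

  # Attach over TCP with DH-HCHAP: --dhchap-key authenticates the host,
  # --dhchap-ctrlr-key the controller. A key the target does not expect
  # fails with code -5 ("Input/output error"), which is what NOT asserts.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1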
00:22:12.648 ************************************ 00:22:12.648 16:20:55 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:12.648 16:20:55 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:12.648 16:20:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:12.648 16:20:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:12.648 16:20:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:12.648 ************************************ 00:22:12.648 START TEST nvmf_bdevio_no_huge 00:22:12.648 ************************************ 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:12.648 * Looking for test storage... 00:22:12.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.648 
16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:12.648 16:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.602 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:14.603 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:14.603 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.603 16:20:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:14.603 Found net devices under 0000:84:00.0: cvl_0_0 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:14.603 Found net devices under 0000:84:00.1: cvl_0_1 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.603 
16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:22:14.603 00:22:14.603 --- 10.0.0.2 ping statistics --- 00:22:14.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.603 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:14.603 00:22:14.603 --- 10.0.0.1 ping statistics --- 00:22:14.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.603 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=354097 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 354097 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 354097 ']' 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 
-- # local max_retries=100 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.603 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.603 [2024-07-15 16:20:57.518081] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:14.603 [2024-07-15 16:20:57.518183] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:14.861 [2024-07-15 16:20:57.593978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.861 [2024-07-15 16:20:57.677770] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.861 [2024-07-15 16:20:57.677826] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.861 [2024-07-15 16:20:57.677841] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.861 [2024-07-15 16:20:57.677853] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.861 [2024-07-15 16:20:57.677864] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.861 [2024-07-15 16:20:57.677917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:14.861 [2024-07-15 16:20:57.680756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:14.861 [2024-07-15 16:20:57.680838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.861 [2024-07-15 16:20:57.680834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.861 [2024-07-15 16:20:57.792247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.861 16:20:57 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.861 Malloc0 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.861 [2024-07-15 16:20:57.830102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:14.861 { 00:22:14.861 "params": { 00:22:14.861 "name": "Nvme$subsystem", 00:22:14.861 "trtype": "$TEST_TRANSPORT", 00:22:14.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.861 "adrfam": "ipv4", 00:22:14.861 "trsvcid": "$NVMF_PORT", 00:22:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.861 "hdgst": ${hdgst:-false}, 00:22:14.861 "ddgst": ${ddgst:-false} 00:22:14.861 }, 00:22:14.861 "method": "bdev_nvme_attach_controller" 00:22:14.861 } 00:22:14.861 EOF 00:22:14.861 )") 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
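gen_nvmf_target_json, traced here, emits the bdev_nvme_attach_controller parameters as a heredoc fragment, runs the result through jq, and hands it to bdevio on file descriptor 62, so no config file touches disk. A rough equivalent is sketched below; it assumes the fragment is wrapped in SPDK's usual subsystems/bdev JSON envelope, which the trace does not show.

  # Sketch, not the helper itself: inline JSON config via process substitution.
  # Parameter values mirror the resolved JSON printed in the trace below.
  bdevio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio
  json='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
  "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
  "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1",
  "hdgst":false,"ddgst":false}}]}]}'
  "$bdevio" --json <(printf '%s' "$json") --no-huge -s 1024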
00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:14.861 16:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:14.861 "params": { 00:22:14.861 "name": "Nvme1", 00:22:14.861 "trtype": "tcp", 00:22:14.861 "traddr": "10.0.0.2", 00:22:14.861 "adrfam": "ipv4", 00:22:14.861 "trsvcid": "4420", 00:22:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.861 "hdgst": false, 00:22:14.861 "ddgst": false 00:22:14.861 }, 00:22:14.861 "method": "bdev_nvme_attach_controller" 00:22:14.861 }' 00:22:15.118 [2024-07-15 16:20:57.873325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:15.118 [2024-07-15 16:20:57.873399] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid354132 ] 00:22:15.118 [2024-07-15 16:20:57.934335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.118 [2024-07-15 16:20:58.016728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.118 [2024-07-15 16:20:58.016776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.118 [2024-07-15 16:20:58.016779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.377 I/O targets: 00:22:15.377 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:15.377 00:22:15.377 00:22:15.377 CUnit - A unit testing framework for C - Version 2.1-3 00:22:15.377 http://cunit.sourceforge.net/ 00:22:15.377 00:22:15.377 00:22:15.377 Suite: bdevio tests on: Nvme1n1 00:22:15.377 Test: blockdev write read block ...passed 00:22:15.636 Test: blockdev write zeroes read block ...passed 00:22:15.636 Test: blockdev write zeroes read no split ...passed 00:22:15.636 Test: blockdev write zeroes read split ...passed 00:22:15.636 Test: blockdev write zeroes read split partial ...passed 00:22:15.636 Test: blockdev reset ...[2024-07-15 16:20:58.459287] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:15.636 [2024-07-15 16:20:58.459422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216f2b0 (9): Bad file descriptor 00:22:15.636 [2024-07-15 16:20:58.474863] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:15.636 passed 00:22:15.636 Test: blockdev write read 8 blocks ...passed 00:22:15.636 Test: blockdev write read size > 128k ...passed 00:22:15.636 Test: blockdev write read invalid size ...passed 00:22:15.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:15.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:15.636 Test: blockdev write read max offset ...passed 00:22:15.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:15.636 Test: blockdev writev readv 8 blocks ...passed 00:22:15.636 Test: blockdev writev readv 30 x 1block ...passed 00:22:15.895 Test: blockdev writev readv block ...passed 00:22:15.895 Test: blockdev writev readv size > 128k ...passed 00:22:15.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:15.895 Test: blockdev comparev and writev ...[2024-07-15 16:20:58.653062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.653100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.653131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.653149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.653613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.653638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.653660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.653675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.654152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.654177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.654198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.654214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.654712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.654735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.654765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:15.895 [2024-07-15 16:20:58.654781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:15.895 passed 00:22:15.895 Test: blockdev nvme passthru rw ...passed 00:22:15.895 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:20:58.738304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:15.895 [2024-07-15 16:20:58.738330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.738584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:15.895 [2024-07-15 16:20:58.738616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:15.895 [2024-07-15 16:20:58.738955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:15.895 [2024-07-15 16:20:58.738989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:15.896 [2024-07-15 16:20:58.739336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:15.896 [2024-07-15 16:20:58.739369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:15.896 passed 00:22:15.896 Test: blockdev nvme admin passthru ...passed 00:22:15.896 Test: blockdev copy ...passed 00:22:15.896 00:22:15.896 Run Summary: Type Total Ran Passed Failed Inactive 00:22:15.896 suites 1 1 n/a 0 0 00:22:15.896 tests 23 23 23 0 0 00:22:15.896 asserts 152 152 152 0 n/a 00:22:15.896 00:22:15.896 Elapsed time = 0.991 seconds 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.154 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.154 rmmod nvme_tcp 00:22:16.412 rmmod nvme_fabrics 00:22:16.412 rmmod nvme_keyring 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 354097 ']' 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 354097 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 354097 ']' 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 354097 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 354097 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 354097' 00:22:16.412 killing process with pid 354097 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 354097 00:22:16.412 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 354097 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.669 16:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.202 16:21:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:19.202 00:22:19.202 real 0m6.488s 00:22:19.202 user 0m10.204s 00:22:19.202 sys 0m2.545s 00:22:19.202 16:21:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:19.202 16:21:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:19.202 ************************************ 00:22:19.202 END TEST nvmf_bdevio_no_huge 00:22:19.202 ************************************ 00:22:19.202 16:21:01 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:19.202 16:21:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:19.202 16:21:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:19.202 16:21:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:19.202 ************************************ 00:22:19.202 START TEST nvmf_tls 00:22:19.202 ************************************ 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:19.202 * Looking for test storage... 
00:22:19.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.202 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:19.203 16:21:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.105 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.105 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.105 
16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.105 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.105 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.105 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.105 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.105 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:21.106 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:21.106 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:21.106 Found net devices under 0000:84:00.0: cvl_0_0 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:21.106 Found net devices under 0000:84:00.1: cvl_0_1 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:22:21.106 00:22:21.106 --- 10.0.0.2 ping statistics --- 00:22:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.106 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:22:21.106 00:22:21.106 --- 10.0.0.1 ping statistics --- 00:22:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.106 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=356442 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 356442 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 356442 ']' 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.106 16:21:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.106 [2024-07-15 16:21:03.897131] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:21.106 [2024-07-15 16:21:03.897217] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.106 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.106 [2024-07-15 16:21:03.964181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.106 [2024-07-15 16:21:04.053818] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.106 [2024-07-15 16:21:04.053896] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
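For reference, the namespace plumbing that nvmftestinit traced above (and that the two pings just verified) boils down to the following commands, condensed straight from the xtrace; the cvl_0_0/cvl_0_1 interface names are this host's E810 ports:

# Target port lives in a private netns; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
# nvmf_tgt itself is then launched under "ip netns exec cvl_0_0_ns_spdk".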
00:22:21.106 [2024-07-15 16:21:04.053910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.106 [2024-07-15 16:21:04.053921] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.106 [2024-07-15 16:21:04.053931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.106 [2024-07-15 16:21:04.053963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:21.365 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:21.623 true 00:22:21.623 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.623 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:21.881 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:21.881 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:21.881 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.138 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.138 16:21:04 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:22.396 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:22.396 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:22.396 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:22.654 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.654 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:22.912 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:22.912 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:22.912 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:22.912 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:23.170 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:23.170 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:23.170 16:21:05 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:23.430 16:21:06 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.430 16:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:23.717 16:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:23.717 16:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:23.717 16:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:23.999 16:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:23.999 16:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.gFelRcw4SW 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.HI6y5ChPeQ 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.gFelRcw4SW 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.HI6y5ChPeQ 00:22:24.257 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:24.514 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:25.080 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.gFelRcw4SW 00:22:25.080 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gFelRcw4SW 00:22:25.080 16:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.337 [2024-07-15 16:21:08.081243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.337 16:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.594 16:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:25.853 [2024-07-15 16:21:08.578587] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.853 [2024-07-15 16:21:08.578840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.853 16:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:25.853 malloc0 00:22:26.111 16:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.370 16:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFelRcw4SW 00:22:26.370 [2024-07-15 16:21:09.340265] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.628 16:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.gFelRcw4SW 00:22:26.628 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.606 Initializing NVMe Controllers 00:22:36.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:36.606 Initialization complete. Launching workers. 
00:22:36.606 ======================================================== 00:22:36.606 Latency(us) 00:22:36.606 Device Information : IOPS MiB/s Average min max 00:22:36.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8002.57 31.26 8000.03 1306.76 9377.07 00:22:36.606 ======================================================== 00:22:36.606 Total : 8002.57 31.26 8000.03 1306.76 9377.07 00:22:36.606 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFelRcw4SW 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gFelRcw4SW' 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=358721 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 358721 /var/tmp/bdevperf.sock 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 358721 ']' 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:36.606 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.606 [2024-07-15 16:21:19.511466] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
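The PSK files exercised in the perf run above (/tmp/tmp.gFelRcw4SW, and /tmp/tmp.HI6y5ChPeQ later) came out of the format_interchange_psk trace earlier in this test. A sketch of what that helper appears to compute, reconstructed from its xtrace: the key is kept as an ASCII string rather than hex-decoded, suffixed with a CRC32, base64-encoded, and framed as an NVMe TLS interchange key. The CRC32 suffix and its little-endian byte order are inferences here, so treat this as illustrative rather than as SPDK's canonical implementation:

# Reconstruction (sketch) of format_key as traced above, for digest=1:
python3 - <<'PY'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"   # first key from the trace
crc = struct.pack("<I", zlib.crc32(key))    # assumed little-endian CRC32
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
PY
# If the assumptions hold, this prints the NVMeTLSkey-1:01:MDAx... string
# that was echoed into the key file earlier in the log.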
00:22:36.606 [2024-07-15 16:21:19.511543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358721 ] 00:22:36.606 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.606 [2024-07-15 16:21:19.573159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.865 [2024-07-15 16:21:19.658038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.865 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.865 16:21:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:36.865 16:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFelRcw4SW 00:22:37.123 [2024-07-15 16:21:19.986290] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.123 [2024-07-15 16:21:19.986419] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.123 TLSTESTn1 00:22:37.123 16:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.383 Running I/O for 10 seconds... 00:22:47.360 00:22:47.360 Latency(us) 00:22:47.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.360 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.360 Verification LBA range: start 0x0 length 0x2000 00:22:47.360 TLSTESTn1 : 10.02 3696.94 14.44 0.00 0.00 34563.70 5558.42 43302.31 00:22:47.360 =================================================================================================================== 00:22:47.360 Total : 3696.94 14.44 0.00 0.00 34563.70 5558.42 43302.31 00:22:47.360 0 00:22:47.360 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.360 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 358721 00:22:47.360 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 358721 ']' 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 358721 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 358721 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 358721' 00:22:47.361 killing process with pid 358721 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 358721 00:22:47.361 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.361 00:22:47.361 Latency(us) 00:22:47.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.361 
=================================================================================================================== 00:22:47.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.361 [2024-07-15 16:21:30.273711] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.361 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 358721 00:22:47.620 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HI6y5ChPeQ 00:22:47.620 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HI6y5ChPeQ 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HI6y5ChPeQ 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HI6y5ChPeQ' 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=360032 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 360032 /var/tmp/bdevperf.sock 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 360032 ']' 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.621 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.621 [2024-07-15 16:21:30.515488] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:47.621 [2024-07-15 16:21:30.515569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360032 ] 00:22:47.621 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.621 [2024-07-15 16:21:30.575166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.879 [2024-07-15 16:21:30.668889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.879 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.879 16:21:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:47.879 16:21:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HI6y5ChPeQ 00:22:48.136 [2024-07-15 16:21:31.020979] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.136 [2024-07-15 16:21:31.021129] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:48.136 [2024-07-15 16:21:31.026588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:48.136 [2024-07-15 16:21:31.027081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1760 (107): Transport endpoint is not connected 00:22:48.136 [2024-07-15 16:21:31.028070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1760 (9): Bad file descriptor 00:22:48.136 [2024-07-15 16:21:31.029069] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.136 [2024-07-15 16:21:31.029104] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:48.136 [2024-07-15 16:21:31.029121] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
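This is the first negative case: bdevperf attaches with the second key, /tmp/tmp.HI6y5ChPeQ, which was never registered on the target for cnode1/host1, so the handshake collapses and the controller lands in the failed state logged above; the JSON-RPC error dump follows. Stripped of the xtrace noise, the failing call is just the following (rpc.py abbreviates the full spdk/scripts/rpc.py path used in this run):

# Attach expected to fail: valid subsystem/host pair, wrong PSK.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.HI6y5ChPeQ
# The NOT wrapper from autotest_common.sh inverts the exit status, roughly:
#   NOT() { local es=0; "$@" || es=$?; (( es != 0 )); }
# so the test step succeeds precisely because this attach fails.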
00:22:48.136 request: 00:22:48.136 { 00:22:48.136 "name": "TLSTEST", 00:22:48.136 "trtype": "tcp", 00:22:48.136 "traddr": "10.0.0.2", 00:22:48.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.136 "adrfam": "ipv4", 00:22:48.136 "trsvcid": "4420", 00:22:48.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.136 "psk": "/tmp/tmp.HI6y5ChPeQ", 00:22:48.136 "method": "bdev_nvme_attach_controller", 00:22:48.136 "req_id": 1 00:22:48.136 } 00:22:48.136 Got JSON-RPC error response 00:22:48.136 response: 00:22:48.136 { 00:22:48.136 "code": -5, 00:22:48.136 "message": "Input/output error" 00:22:48.136 } 00:22:48.136 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 360032 00:22:48.136 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 360032 ']' 00:22:48.136 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 360032 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360032 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360032' 00:22:48.137 killing process with pid 360032 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 360032 00:22:48.137 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.137 00:22:48.137 Latency(us) 00:22:48.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.137 =================================================================================================================== 00:22:48.137 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.137 [2024-07-15 16:21:31.073632] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.137 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 360032 00:22:48.394 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.394 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:48.394 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.394 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.394 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.394 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gFelRcw4SW 00:22:48.394 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gFelRcw4SW 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gFelRcw4SW 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gFelRcw4SW' 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=360168 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 360168 /var/tmp/bdevperf.sock 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 360168 ']' 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.395 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.395 [2024-07-15 16:21:31.330940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:48.395 [2024-07-15 16:21:31.331016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360168 ] 00:22:48.395 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.653 [2024-07-15 16:21:31.389745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.653 [2024-07-15 16:21:31.471303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.653 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:48.653 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:48.653 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.gFelRcw4SW 00:22:48.912 [2024-07-15 16:21:31.792755] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.912 [2024-07-15 16:21:31.792891] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:48.912 [2024-07-15 16:21:31.798513] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:48.912 [2024-07-15 16:21:31.798543] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:48.912 [2024-07-15 16:21:31.798598] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:48.912 [2024-07-15 16:21:31.798804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb760 (107): Transport endpoint is not connected 00:22:48.912 [2024-07-15 16:21:31.799791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb760 (9): Bad file descriptor 00:22:48.912 [2024-07-15 16:21:31.800790] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.912 [2024-07-15 16:21:31.800812] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:48.912 [2024-07-15 16:21:31.800852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
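Note on the "Could not find PSK for identity" errors above: the target resolves TLS keys by building an identity string from the host NQN and subsystem NQN of the incoming connection and looking up a PSK registered for exactly that pair. A minimal sketch of the lookup string, assuming the layout printed in the error output:

# Identity the target searches for (layout as printed in the errors above):
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
echo "NVMe0R01 ${hostnqn} ${subnqn}"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# Only the host1/cnode1 pair was registered with a PSK, so this host2 lookup
# fails and the attach ends in the -5 Input/output error dumped below.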
00:22:48.912 request: 00:22:48.912 { 00:22:48.912 "name": "TLSTEST", 00:22:48.912 "trtype": "tcp", 00:22:48.912 "traddr": "10.0.0.2", 00:22:48.912 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:48.912 "adrfam": "ipv4", 00:22:48.912 "trsvcid": "4420", 00:22:48.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.912 "psk": "/tmp/tmp.gFelRcw4SW", 00:22:48.912 "method": "bdev_nvme_attach_controller", 00:22:48.912 "req_id": 1 00:22:48.912 } 00:22:48.912 Got JSON-RPC error response 00:22:48.912 response: 00:22:48.912 { 00:22:48.912 "code": -5, 00:22:48.912 "message": "Input/output error" 00:22:48.912 } 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 360168 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 360168 ']' 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 360168 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360168 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360168' 00:22:48.912 killing process with pid 360168 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 360168 00:22:48.912 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.912 00:22:48.912 Latency(us) 00:22:48.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.912 =================================================================================================================== 00:22:48.912 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.912 [2024-07-15 16:21:31.853442] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:48.912 16:21:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 360168 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFelRcw4SW 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.169 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFelRcw4SW 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFelRcw4SW 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gFelRcw4SW' 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=360188 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 360188 /var/tmp/bdevperf.sock 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 360188 ']' 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.170 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.170 [2024-07-15 16:21:32.121321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:49.170 [2024-07-15 16:21:32.121396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360188 ] 00:22:49.428 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.428 [2024-07-15 16:21:32.181876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.428 [2024-07-15 16:21:32.269772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.428 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.428 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.428 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gFelRcw4SW 00:22:49.687 [2024-07-15 16:21:32.600553] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.687 [2024-07-15 16:21:32.600683] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.687 [2024-07-15 16:21:32.610543] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:49.687 [2024-07-15 16:21:32.610575] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:49.687 [2024-07-15 16:21:32.610632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.687 [2024-07-15 16:21:32.611520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c8760 (107): Transport endpoint is not connected 00:22:49.687 [2024-07-15 16:21:32.612511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c8760 (9): Bad file descriptor 00:22:49.687 [2024-07-15 16:21:32.613511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:49.687 [2024-07-15 16:21:32.613530] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.687 [2024-07-15 16:21:32.613562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
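Both attach failures so far are intentional: each run_bdevperf call is wrapped in NOT, the autotest helper that inverts an exit status so an expected failure counts as a pass. A simplified sketch of the pattern (the real helper in autotest_common.sh also tracks the status in es, as the xtrace above shows):

# Simplified sketch of the expected-failure wrapper used by these cases:
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    fi
    return 0       # command failed as expected -> test passes
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gFelRcw4SW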
00:22:49.687 request: 00:22:49.687 { 00:22:49.687 "name": "TLSTEST", 00:22:49.687 "trtype": "tcp", 00:22:49.687 "traddr": "10.0.0.2", 00:22:49.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.687 "adrfam": "ipv4", 00:22:49.687 "trsvcid": "4420", 00:22:49.687 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:49.687 "psk": "/tmp/tmp.gFelRcw4SW", 00:22:49.687 "method": "bdev_nvme_attach_controller", 00:22:49.687 "req_id": 1 00:22:49.687 } 00:22:49.687 Got JSON-RPC error response 00:22:49.687 response: 00:22:49.687 { 00:22:49.687 "code": -5, 00:22:49.687 "message": "Input/output error" 00:22:49.687 } 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 360188 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 360188 ']' 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 360188 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360188 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360188' 00:22:49.687 killing process with pid 360188 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 360188 00:22:49.687 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.687 00:22:49.687 Latency(us) 00:22:49.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.687 =================================================================================================================== 00:22:49.687 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.687 [2024-07-15 16:21:32.663930] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:49.687 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 360188 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.945 
16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=360318 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 360318 /var/tmp/bdevperf.sock 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 360318 ']' 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.945 16:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.203 [2024-07-15 16:21:32.931461] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:50.203 [2024-07-15 16:21:32.931538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360318 ] 00:22:50.203 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.203 [2024-07-15 16:21:32.990254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.203 [2024-07-15 16:21:33.072886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.203 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:50.203 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:50.203 16:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:50.768 [2024-07-15 16:21:33.442878] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:50.768 [2024-07-15 16:21:33.444927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd74e10 (9): Bad file descriptor 00:22:50.768 [2024-07-15 16:21:33.445925] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.768 [2024-07-15 16:21:33.445948] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:50.768 [2024-07-15 16:21:33.445967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
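This third negative case (tls.sh@155) drops the --psk argument entirely. With no key the TLS handshake against the -k listener never completes, so the failure surfaces as a dead socket (errno 107 / bad file descriptor) instead of a PSK-lookup error, though the RPC result in the dump below is the same -5. The call is the same as the log's, minus the PSK (rpc.py path shortened for illustration):

# Attach with no TLS credentials against a TLS-only listener:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# expected: {"code": -5, "message": "Input/output error"}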
00:22:50.768 request: 00:22:50.768 { 00:22:50.768 "name": "TLSTEST", 00:22:50.768 "trtype": "tcp", 00:22:50.768 "traddr": "10.0.0.2", 00:22:50.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.768 "adrfam": "ipv4", 00:22:50.768 "trsvcid": "4420", 00:22:50.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.768 "method": "bdev_nvme_attach_controller", 00:22:50.768 "req_id": 1 00:22:50.768 } 00:22:50.768 Got JSON-RPC error response 00:22:50.768 response: 00:22:50.768 { 00:22:50.768 "code": -5, 00:22:50.768 "message": "Input/output error" 00:22:50.768 } 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 360318 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 360318 ']' 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 360318 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360318 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360318' 00:22:50.768 killing process with pid 360318 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 360318 00:22:50.768 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.768 00:22:50.768 Latency(us) 00:22:50.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.768 =================================================================================================================== 00:22:50.768 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 360318 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 356442 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 356442 ']' 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 356442 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 356442 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 356442' 00:22:50.768 killing process with pid 356442 00:22:50.768 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 356442 00:22:50.769 
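With the per-case bdevperf runs done, tls.sh@158 tears down the first target and @159 generates a fresh key in TLS PSK interchange format. A standalone sketch of what format_interchange_psk computes next, assuming (as in nvmf/common.sh's format_key, whose python heredoc appears below) that a CRC32 of the raw key bytes is appended little-endian before base64 encoding:

# Hypothetical standalone reproduction of the key derived below:
key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity check, little-endian
# "02" is the digest field, from the trailing argument 2 in the call below
print(f"NVMeTLSkey-1:02:{base64.b64encode(key + crc).decode()}:")
EOF
# If the CRC assumption holds, this prints the NVMeTLSkey-1:02:...wWXNJw==:
# value that the test stores in /tmp/tmp.UTwCVTPOr7 and chmods to 0600.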
[2024-07-15 16:21:33.740977] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:50.769 16:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 356442 00:22:51.027 16:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.027 16:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.027 16:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.027 16:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:51.027 16:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:51.027 16:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:51.027 16:21:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.UTwCVTPOr7 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.UTwCVTPOr7 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=360475 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 360475 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 360475 ']' 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:51.285 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.286 [2024-07-15 16:21:34.089846] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:51.286 [2024-07-15 16:21:34.089925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.286 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.286 [2024-07-15 16:21:34.152734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.286 [2024-07-15 16:21:34.235605] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.286 [2024-07-15 16:21:34.235659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.286 [2024-07-15 16:21:34.235686] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.286 [2024-07-15 16:21:34.235697] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.286 [2024-07-15 16:21:34.235706] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.286 [2024-07-15 16:21:34.235732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.UTwCVTPOr7 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UTwCVTPOr7 00:22:51.544 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:51.802 [2024-07-15 16:21:34.645355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.802 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.060 16:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.331 [2024-07-15 16:21:35.218887] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.331 [2024-07-15 16:21:35.219115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.331 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:52.588 malloc0 00:22:52.588 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.845 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7 
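For reference, the target-side TLS provisioning that setup_nvmf_tgt (tls.sh@165) just performed reduces to six RPCs; -k marks the listener as TLS, and --psk on nvmf_subsystem_add_host is the file-based form that the deprecation warning below refers to (rpc.py path shortened):

# Condensed recap of the setup sequence traced above:
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7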
00:22:53.102 [2024-07-15 16:21:35.972328] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UTwCVTPOr7 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UTwCVTPOr7' 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=360758 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 360758 /var/tmp/bdevperf.sock 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 360758 ']' 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.102 16:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.102 [2024-07-15 16:21:36.034586] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:53.102 [2024-07-15 16:21:36.034657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360758 ] 00:22:53.102 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.359 [2024-07-15 16:21:36.093189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.359 [2024-07-15 16:21:36.176879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.359 16:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:53.359 16:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:53.360 16:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7 00:22:53.617 [2024-07-15 16:21:36.495455] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.617 [2024-07-15 16:21:36.495591] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.617 TLSTESTn1 00:22:53.617 16:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.873 Running I/O for 10 seconds... 00:23:03.897 00:23:03.897 Latency(us) 00:23:03.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.897 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.897 Verification LBA range: start 0x0 length 0x2000 00:23:03.897 TLSTESTn1 : 10.03 3613.01 14.11 0.00 0.00 35361.11 5606.97 42137.22 00:23:03.897 =================================================================================================================== 00:23:03.897 Total : 3613.01 14.11 0.00 0.00 35361.11 5606.97 42137.22 00:23:03.897 0 00:23:03.897 16:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.897 16:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 360758 00:23:03.897 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 360758 ']' 00:23:03.897 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 360758 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360758 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360758' 00:23:03.898 killing process with pid 360758 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 360758 00:23:03.898 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.898 00:23:03.898 Latency(us) 00:23:03.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.898 
=================================================================================================================== 00:23:03.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.898 [2024-07-15 16:21:46.786132] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:03.898 16:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 360758 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.UTwCVTPOr7 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UTwCVTPOr7 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UTwCVTPOr7 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UTwCVTPOr7 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UTwCVTPOr7' 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=361960 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 361960 /var/tmp/bdevperf.sock 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 361960 ']' 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:04.156 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.156 [2024-07-15 16:21:47.057608] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:04.156 [2024-07-15 16:21:47.057685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361960 ] 00:23:04.156 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.156 [2024-07-15 16:21:47.120786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.415 [2024-07-15 16:21:47.206886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.415 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:04.415 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:04.415 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7 00:23:04.674 [2024-07-15 16:21:47.533705] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.674 [2024-07-15 16:21:47.533818] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:04.674 [2024-07-15 16:21:47.533834] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.UTwCVTPOr7 00:23:04.674 request: 00:23:04.674 { 00:23:04.674 "name": "TLSTEST", 00:23:04.674 "trtype": "tcp", 00:23:04.674 "traddr": "10.0.0.2", 00:23:04.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.674 "adrfam": "ipv4", 00:23:04.674 "trsvcid": "4420", 00:23:04.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.674 "psk": "/tmp/tmp.UTwCVTPOr7", 00:23:04.674 "method": "bdev_nvme_attach_controller", 00:23:04.674 "req_id": 1 00:23:04.674 } 00:23:04.674 Got JSON-RPC error response 00:23:04.674 response: 00:23:04.674 { 00:23:04.674 "code": -1, 00:23:04.674 "message": "Operation not permitted" 00:23:04.674 } 00:23:04.674 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 361960 00:23:04.674 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 361960 ']' 00:23:04.674 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 361960 00:23:04.674 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:04.674 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.675 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 361960 00:23:04.675 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:04.675 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:04.675 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 361960' 00:23:04.675 killing process with pid 361960 00:23:04.675 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 361960 00:23:04.675 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.675 00:23:04.675 Latency(us) 00:23:04.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.675 =================================================================================================================== 00:23:04.675 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:04.675 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 361960 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 360475 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 360475 ']' 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 360475 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 360475 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 360475' 00:23:04.933 killing process with pid 360475 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 360475 00:23:04.933 [2024-07-15 16:21:47.827118] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:04.933 16:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 360475 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=362092 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 362092 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 362092 ']' 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:05.192 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.192 [2024-07-15 16:21:48.100637] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:05.192 [2024-07-15 16:21:48.100716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.192 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.498 [2024-07-15 16:21:48.172501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.498 [2024-07-15 16:21:48.266712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.498 [2024-07-15 16:21:48.266793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.498 [2024-07-15 16:21:48.266811] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.498 [2024-07-15 16:21:48.266825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.498 [2024-07-15 16:21:48.266837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.498 [2024-07-15 16:21:48.266873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.UTwCVTPOr7 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UTwCVTPOr7 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.UTwCVTPOr7 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UTwCVTPOr7 00:23:05.498 16:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:05.806 [2024-07-15 16:21:48.634491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.806 16:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:06.066 16:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:06.324 [2024-07-15 16:21:49.115814] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
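The Operation-not-permitted and Internal-error cases around this point (tls.sh@170 through @177) exercise the PSK file-permission rule: after chmod 0666 the initiator rejects the key with -1 Operation not permitted, and, as the nvmf_subsystem_add_host failure just below shows, the target rejects it with -32603 Internal error. A PSK file must be private to its owner; the sketch states the rule being tested, not SPDK's exact check:

# PSK files must be owner-only:
chmod 0666 /tmp/tmp.UTwCVTPOr7   # group/world access -> key rejected on both ends
chmod 0600 /tmp/tmp.UTwCVTPOr7   # owner read/write only -> accepted (done at tls.sh@181 below)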
00:23:06.324 [2024-07-15 16:21:49.116091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.324 16:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:06.583 malloc0 00:23:06.583 16:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:06.841 16:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7 00:23:07.102 [2024-07-15 16:21:49.913727] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:07.102 [2024-07-15 16:21:49.913783] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:07.102 [2024-07-15 16:21:49.913832] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:07.102 request: 00:23:07.102 { 00:23:07.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.102 "host": "nqn.2016-06.io.spdk:host1", 00:23:07.102 "psk": "/tmp/tmp.UTwCVTPOr7", 00:23:07.102 "method": "nvmf_subsystem_add_host", 00:23:07.102 "req_id": 1 00:23:07.102 } 00:23:07.102 Got JSON-RPC error response 00:23:07.102 response: 00:23:07.102 { 00:23:07.102 "code": -32603, 00:23:07.102 "message": "Internal error" 00:23:07.102 } 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 362092 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 362092 ']' 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 362092 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 362092 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 362092' 00:23:07.102 killing process with pid 362092 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 362092 00:23:07.102 16:21:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 362092 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.UTwCVTPOr7 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=362393 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 362393 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 362393 ']' 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:07.361 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.361 [2024-07-15 16:21:50.266942] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:07.361 [2024-07-15 16:21:50.267025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.361 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.361 [2024-07-15 16:21:50.329706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.619 [2024-07-15 16:21:50.421394] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.619 [2024-07-15 16:21:50.421443] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.619 [2024-07-15 16:21:50.421473] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.619 [2024-07-15 16:21:50.421485] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.619 [2024-07-15 16:21:50.421496] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.619 [2024-07-15 16:21:50.421535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.UTwCVTPOr7 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UTwCVTPOr7 00:23:07.619 16:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.879 [2024-07-15 16:21:50.828991] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.879 16:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.138 16:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.396 [2024-07-15 16:21:51.354448] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.396 [2024-07-15 16:21:51.354705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.396 16:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:08.960 malloc0 00:23:08.960 16:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.960 16:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7 00:23:09.217 [2024-07-15 16:21:52.143250] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=362678 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 362678 /var/tmp/bdevperf.sock 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 362678 ']' 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:09.217 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.476 [2024-07-15 16:21:52.201936] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:09.476 [2024-07-15 16:21:52.202007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362678 ] 00:23:09.476 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.476 [2024-07-15 16:21:52.259558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.476 [2024-07-15 16:21:52.343649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.734 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:09.734 16:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:09.734 16:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7 00:23:09.991 [2024-07-15 16:21:52.731210] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.991 [2024-07-15 16:21:52.731325] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:09.991 TLSTESTn1 00:23:09.991 16:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:10.249 16:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:10.249 "subsystems": [ 00:23:10.249 { 00:23:10.249 "subsystem": "keyring", 00:23:10.249 "config": [] 00:23:10.249 }, 00:23:10.249 { 00:23:10.249 "subsystem": "iobuf", 00:23:10.249 "config": [ 00:23:10.249 { 00:23:10.249 "method": "iobuf_set_options", 00:23:10.249 "params": { 00:23:10.249 "small_pool_count": 8192, 00:23:10.249 "large_pool_count": 1024, 00:23:10.249 "small_bufsize": 8192, 00:23:10.249 "large_bufsize": 135168 00:23:10.249 } 00:23:10.249 } 00:23:10.249 ] 00:23:10.249 }, 00:23:10.249 { 00:23:10.249 "subsystem": "sock", 00:23:10.249 "config": [ 00:23:10.249 { 00:23:10.249 "method": "sock_set_default_impl", 00:23:10.249 "params": { 00:23:10.249 "impl_name": "posix" 00:23:10.249 } 00:23:10.249 }, 00:23:10.249 { 00:23:10.249 "method": "sock_impl_set_options", 00:23:10.249 "params": { 00:23:10.249 "impl_name": "ssl", 00:23:10.249 "recv_buf_size": 4096, 00:23:10.249 "send_buf_size": 4096, 00:23:10.249 "enable_recv_pipe": true, 00:23:10.249 "enable_quickack": false, 00:23:10.249 "enable_placement_id": 0, 00:23:10.249 "enable_zerocopy_send_server": true, 00:23:10.249 "enable_zerocopy_send_client": false, 00:23:10.249 "zerocopy_threshold": 0, 00:23:10.249 "tls_version": 0, 00:23:10.249 "enable_ktls": false 00:23:10.249 } 00:23:10.249 }, 00:23:10.249 { 00:23:10.249 "method": "sock_impl_set_options", 00:23:10.249 "params": { 00:23:10.250 "impl_name": "posix", 00:23:10.250 "recv_buf_size": 2097152, 00:23:10.250 "send_buf_size": 2097152, 
00:23:10.250 "enable_recv_pipe": true, 00:23:10.250 "enable_quickack": false, 00:23:10.250 "enable_placement_id": 0, 00:23:10.250 "enable_zerocopy_send_server": true, 00:23:10.250 "enable_zerocopy_send_client": false, 00:23:10.250 "zerocopy_threshold": 0, 00:23:10.250 "tls_version": 0, 00:23:10.250 "enable_ktls": false 00:23:10.250 } 00:23:10.250 } 00:23:10.250 ] 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "subsystem": "vmd", 00:23:10.250 "config": [] 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "subsystem": "accel", 00:23:10.250 "config": [ 00:23:10.250 { 00:23:10.250 "method": "accel_set_options", 00:23:10.250 "params": { 00:23:10.250 "small_cache_size": 128, 00:23:10.250 "large_cache_size": 16, 00:23:10.250 "task_count": 2048, 00:23:10.250 "sequence_count": 2048, 00:23:10.250 "buf_count": 2048 00:23:10.250 } 00:23:10.250 } 00:23:10.250 ] 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "subsystem": "bdev", 00:23:10.250 "config": [ 00:23:10.250 { 00:23:10.250 "method": "bdev_set_options", 00:23:10.250 "params": { 00:23:10.250 "bdev_io_pool_size": 65535, 00:23:10.250 "bdev_io_cache_size": 256, 00:23:10.250 "bdev_auto_examine": true, 00:23:10.250 "iobuf_small_cache_size": 128, 00:23:10.250 "iobuf_large_cache_size": 16 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "bdev_raid_set_options", 00:23:10.250 "params": { 00:23:10.250 "process_window_size_kb": 1024 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "bdev_iscsi_set_options", 00:23:10.250 "params": { 00:23:10.250 "timeout_sec": 30 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "bdev_nvme_set_options", 00:23:10.250 "params": { 00:23:10.250 "action_on_timeout": "none", 00:23:10.250 "timeout_us": 0, 00:23:10.250 "timeout_admin_us": 0, 00:23:10.250 "keep_alive_timeout_ms": 10000, 00:23:10.250 "arbitration_burst": 0, 00:23:10.250 "low_priority_weight": 0, 00:23:10.250 "medium_priority_weight": 0, 00:23:10.250 "high_priority_weight": 0, 00:23:10.250 "nvme_adminq_poll_period_us": 10000, 00:23:10.250 "nvme_ioq_poll_period_us": 0, 00:23:10.250 "io_queue_requests": 0, 00:23:10.250 "delay_cmd_submit": true, 00:23:10.250 "transport_retry_count": 4, 00:23:10.250 "bdev_retry_count": 3, 00:23:10.250 "transport_ack_timeout": 0, 00:23:10.250 "ctrlr_loss_timeout_sec": 0, 00:23:10.250 "reconnect_delay_sec": 0, 00:23:10.250 "fast_io_fail_timeout_sec": 0, 00:23:10.250 "disable_auto_failback": false, 00:23:10.250 "generate_uuids": false, 00:23:10.250 "transport_tos": 0, 00:23:10.250 "nvme_error_stat": false, 00:23:10.250 "rdma_srq_size": 0, 00:23:10.250 "io_path_stat": false, 00:23:10.250 "allow_accel_sequence": false, 00:23:10.250 "rdma_max_cq_size": 0, 00:23:10.250 "rdma_cm_event_timeout_ms": 0, 00:23:10.250 "dhchap_digests": [ 00:23:10.250 "sha256", 00:23:10.250 "sha384", 00:23:10.250 "sha512" 00:23:10.250 ], 00:23:10.250 "dhchap_dhgroups": [ 00:23:10.250 "null", 00:23:10.250 "ffdhe2048", 00:23:10.250 "ffdhe3072", 00:23:10.250 "ffdhe4096", 00:23:10.250 "ffdhe6144", 00:23:10.250 "ffdhe8192" 00:23:10.250 ] 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "bdev_nvme_set_hotplug", 00:23:10.250 "params": { 00:23:10.250 "period_us": 100000, 00:23:10.250 "enable": false 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "bdev_malloc_create", 00:23:10.250 "params": { 00:23:10.250 "name": "malloc0", 00:23:10.250 "num_blocks": 8192, 00:23:10.250 "block_size": 4096, 00:23:10.250 "physical_block_size": 4096, 00:23:10.250 "uuid": "ef1aa017-f125-49f8-8aa6-abacf38c1c0f", 
00:23:10.250 "optimal_io_boundary": 0 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "bdev_wait_for_examine" 00:23:10.250 } 00:23:10.250 ] 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "subsystem": "nbd", 00:23:10.250 "config": [] 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "subsystem": "scheduler", 00:23:10.250 "config": [ 00:23:10.250 { 00:23:10.250 "method": "framework_set_scheduler", 00:23:10.250 "params": { 00:23:10.250 "name": "static" 00:23:10.250 } 00:23:10.250 } 00:23:10.250 ] 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "subsystem": "nvmf", 00:23:10.250 "config": [ 00:23:10.250 { 00:23:10.250 "method": "nvmf_set_config", 00:23:10.250 "params": { 00:23:10.250 "discovery_filter": "match_any", 00:23:10.250 "admin_cmd_passthru": { 00:23:10.250 "identify_ctrlr": false 00:23:10.250 } 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "nvmf_set_max_subsystems", 00:23:10.250 "params": { 00:23:10.250 "max_subsystems": 1024 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "nvmf_set_crdt", 00:23:10.250 "params": { 00:23:10.250 "crdt1": 0, 00:23:10.250 "crdt2": 0, 00:23:10.250 "crdt3": 0 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "nvmf_create_transport", 00:23:10.250 "params": { 00:23:10.250 "trtype": "TCP", 00:23:10.250 "max_queue_depth": 128, 00:23:10.250 "max_io_qpairs_per_ctrlr": 127, 00:23:10.250 "in_capsule_data_size": 4096, 00:23:10.250 "max_io_size": 131072, 00:23:10.250 "io_unit_size": 131072, 00:23:10.250 "max_aq_depth": 128, 00:23:10.250 "num_shared_buffers": 511, 00:23:10.250 "buf_cache_size": 4294967295, 00:23:10.250 "dif_insert_or_strip": false, 00:23:10.250 "zcopy": false, 00:23:10.250 "c2h_success": false, 00:23:10.250 "sock_priority": 0, 00:23:10.250 "abort_timeout_sec": 1, 00:23:10.250 "ack_timeout": 0, 00:23:10.250 "data_wr_pool_size": 0 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "nvmf_create_subsystem", 00:23:10.250 "params": { 00:23:10.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.250 "allow_any_host": false, 00:23:10.250 "serial_number": "SPDK00000000000001", 00:23:10.250 "model_number": "SPDK bdev Controller", 00:23:10.250 "max_namespaces": 10, 00:23:10.250 "min_cntlid": 1, 00:23:10.250 "max_cntlid": 65519, 00:23:10.250 "ana_reporting": false 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "nvmf_subsystem_add_host", 00:23:10.250 "params": { 00:23:10.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.250 "host": "nqn.2016-06.io.spdk:host1", 00:23:10.250 "psk": "/tmp/tmp.UTwCVTPOr7" 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "nvmf_subsystem_add_ns", 00:23:10.250 "params": { 00:23:10.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.250 "namespace": { 00:23:10.250 "nsid": 1, 00:23:10.250 "bdev_name": "malloc0", 00:23:10.250 "nguid": "EF1AA017F12549F88AA6ABACF38C1C0F", 00:23:10.250 "uuid": "ef1aa017-f125-49f8-8aa6-abacf38c1c0f", 00:23:10.250 "no_auto_visible": false 00:23:10.250 } 00:23:10.250 } 00:23:10.250 }, 00:23:10.250 { 00:23:10.250 "method": "nvmf_subsystem_add_listener", 00:23:10.250 "params": { 00:23:10.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.250 "listen_address": { 00:23:10.250 "trtype": "TCP", 00:23:10.250 "adrfam": "IPv4", 00:23:10.250 "traddr": "10.0.0.2", 00:23:10.250 "trsvcid": "4420" 00:23:10.250 }, 00:23:10.250 "secure_channel": true 00:23:10.250 } 00:23:10.250 } 00:23:10.250 ] 00:23:10.250 } 00:23:10.250 ] 00:23:10.250 }' 00:23:10.250 16:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:10.509 16:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:10.509 "subsystems": [ 00:23:10.509 { 00:23:10.509 "subsystem": "keyring", 00:23:10.509 "config": [] 00:23:10.509 }, 00:23:10.509 { 00:23:10.509 "subsystem": "iobuf", 00:23:10.509 "config": [ 00:23:10.509 { 00:23:10.509 "method": "iobuf_set_options", 00:23:10.509 "params": { 00:23:10.509 "small_pool_count": 8192, 00:23:10.509 "large_pool_count": 1024, 00:23:10.509 "small_bufsize": 8192, 00:23:10.509 "large_bufsize": 135168 00:23:10.509 } 00:23:10.509 } 00:23:10.509 ] 00:23:10.509 }, 00:23:10.509 { 00:23:10.509 "subsystem": "sock", 00:23:10.509 "config": [ 00:23:10.509 { 00:23:10.509 "method": "sock_set_default_impl", 00:23:10.509 "params": { 00:23:10.509 "impl_name": "posix" 00:23:10.509 } 00:23:10.509 }, 00:23:10.509 { 00:23:10.509 "method": "sock_impl_set_options", 00:23:10.509 "params": { 00:23:10.509 "impl_name": "ssl", 00:23:10.509 "recv_buf_size": 4096, 00:23:10.509 "send_buf_size": 4096, 00:23:10.509 "enable_recv_pipe": true, 00:23:10.509 "enable_quickack": false, 00:23:10.509 "enable_placement_id": 0, 00:23:10.509 "enable_zerocopy_send_server": true, 00:23:10.509 "enable_zerocopy_send_client": false, 00:23:10.509 "zerocopy_threshold": 0, 00:23:10.509 "tls_version": 0, 00:23:10.509 "enable_ktls": false 00:23:10.509 } 00:23:10.509 }, 00:23:10.509 { 00:23:10.509 "method": "sock_impl_set_options", 00:23:10.509 "params": { 00:23:10.509 "impl_name": "posix", 00:23:10.509 "recv_buf_size": 2097152, 00:23:10.509 "send_buf_size": 2097152, 00:23:10.509 "enable_recv_pipe": true, 00:23:10.509 "enable_quickack": false, 00:23:10.509 "enable_placement_id": 0, 00:23:10.509 "enable_zerocopy_send_server": true, 00:23:10.509 "enable_zerocopy_send_client": false, 00:23:10.509 "zerocopy_threshold": 0, 00:23:10.509 "tls_version": 0, 00:23:10.509 "enable_ktls": false 00:23:10.509 } 00:23:10.509 } 00:23:10.509 ] 00:23:10.509 }, 00:23:10.509 { 00:23:10.509 "subsystem": "vmd", 00:23:10.509 "config": [] 00:23:10.509 }, 00:23:10.509 { 00:23:10.510 "subsystem": "accel", 00:23:10.510 "config": [ 00:23:10.510 { 00:23:10.510 "method": "accel_set_options", 00:23:10.510 "params": { 00:23:10.510 "small_cache_size": 128, 00:23:10.510 "large_cache_size": 16, 00:23:10.510 "task_count": 2048, 00:23:10.510 "sequence_count": 2048, 00:23:10.510 "buf_count": 2048 00:23:10.510 } 00:23:10.510 } 00:23:10.510 ] 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "subsystem": "bdev", 00:23:10.510 "config": [ 00:23:10.510 { 00:23:10.510 "method": "bdev_set_options", 00:23:10.510 "params": { 00:23:10.510 "bdev_io_pool_size": 65535, 00:23:10.510 "bdev_io_cache_size": 256, 00:23:10.510 "bdev_auto_examine": true, 00:23:10.510 "iobuf_small_cache_size": 128, 00:23:10.510 "iobuf_large_cache_size": 16 00:23:10.510 } 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "method": "bdev_raid_set_options", 00:23:10.510 "params": { 00:23:10.510 "process_window_size_kb": 1024 00:23:10.510 } 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "method": "bdev_iscsi_set_options", 00:23:10.510 "params": { 00:23:10.510 "timeout_sec": 30 00:23:10.510 } 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "method": "bdev_nvme_set_options", 00:23:10.510 "params": { 00:23:10.510 "action_on_timeout": "none", 00:23:10.510 "timeout_us": 0, 00:23:10.510 "timeout_admin_us": 0, 00:23:10.510 "keep_alive_timeout_ms": 10000, 00:23:10.510 "arbitration_burst": 0, 00:23:10.510 "low_priority_weight": 0, 
00:23:10.510 "medium_priority_weight": 0, 00:23:10.510 "high_priority_weight": 0, 00:23:10.510 "nvme_adminq_poll_period_us": 10000, 00:23:10.510 "nvme_ioq_poll_period_us": 0, 00:23:10.510 "io_queue_requests": 512, 00:23:10.510 "delay_cmd_submit": true, 00:23:10.510 "transport_retry_count": 4, 00:23:10.510 "bdev_retry_count": 3, 00:23:10.510 "transport_ack_timeout": 0, 00:23:10.510 "ctrlr_loss_timeout_sec": 0, 00:23:10.510 "reconnect_delay_sec": 0, 00:23:10.510 "fast_io_fail_timeout_sec": 0, 00:23:10.510 "disable_auto_failback": false, 00:23:10.510 "generate_uuids": false, 00:23:10.510 "transport_tos": 0, 00:23:10.510 "nvme_error_stat": false, 00:23:10.510 "rdma_srq_size": 0, 00:23:10.510 "io_path_stat": false, 00:23:10.510 "allow_accel_sequence": false, 00:23:10.510 "rdma_max_cq_size": 0, 00:23:10.510 "rdma_cm_event_timeout_ms": 0, 00:23:10.510 "dhchap_digests": [ 00:23:10.510 "sha256", 00:23:10.510 "sha384", 00:23:10.510 "sha512" 00:23:10.510 ], 00:23:10.510 "dhchap_dhgroups": [ 00:23:10.510 "null", 00:23:10.510 "ffdhe2048", 00:23:10.510 "ffdhe3072", 00:23:10.510 "ffdhe4096", 00:23:10.510 "ffdhe6144", 00:23:10.510 "ffdhe8192" 00:23:10.510 ] 00:23:10.510 } 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "method": "bdev_nvme_attach_controller", 00:23:10.510 "params": { 00:23:10.510 "name": "TLSTEST", 00:23:10.510 "trtype": "TCP", 00:23:10.510 "adrfam": "IPv4", 00:23:10.510 "traddr": "10.0.0.2", 00:23:10.510 "trsvcid": "4420", 00:23:10.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.510 "prchk_reftag": false, 00:23:10.510 "prchk_guard": false, 00:23:10.510 "ctrlr_loss_timeout_sec": 0, 00:23:10.510 "reconnect_delay_sec": 0, 00:23:10.510 "fast_io_fail_timeout_sec": 0, 00:23:10.510 "psk": "/tmp/tmp.UTwCVTPOr7", 00:23:10.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.510 "hdgst": false, 00:23:10.510 "ddgst": false 00:23:10.510 } 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "method": "bdev_nvme_set_hotplug", 00:23:10.510 "params": { 00:23:10.510 "period_us": 100000, 00:23:10.510 "enable": false 00:23:10.510 } 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "method": "bdev_wait_for_examine" 00:23:10.510 } 00:23:10.510 ] 00:23:10.510 }, 00:23:10.510 { 00:23:10.510 "subsystem": "nbd", 00:23:10.510 "config": [] 00:23:10.510 } 00:23:10.510 ] 00:23:10.510 }' 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 362678 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 362678 ']' 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 362678 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 362678 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 362678' 00:23:10.510 killing process with pid 362678 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 362678 00:23:10.510 Received shutdown signal, test time was about 10.000000 seconds 00:23:10.510 00:23:10.510 Latency(us) 00:23:10.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.510 
=================================================================================================================== 00:23:10.510 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.510 [2024-07-15 16:21:53.471208] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:10.510 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 362678 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 362393 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 362393 ']' 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 362393 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 362393 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 362393' 00:23:10.768 killing process with pid 362393 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 362393 00:23:10.768 [2024-07-15 16:21:53.694438] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:10.768 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 362393 00:23:11.026 16:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:11.026 16:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.026 16:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:11.026 "subsystems": [ 00:23:11.027 { 00:23:11.027 "subsystem": "keyring", 00:23:11.027 "config": [] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "iobuf", 00:23:11.027 "config": [ 00:23:11.027 { 00:23:11.027 "method": "iobuf_set_options", 00:23:11.027 "params": { 00:23:11.027 "small_pool_count": 8192, 00:23:11.027 "large_pool_count": 1024, 00:23:11.027 "small_bufsize": 8192, 00:23:11.027 "large_bufsize": 135168 00:23:11.027 } 00:23:11.027 } 00:23:11.027 ] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "sock", 00:23:11.027 "config": [ 00:23:11.027 { 00:23:11.027 "method": "sock_set_default_impl", 00:23:11.027 "params": { 00:23:11.027 "impl_name": "posix" 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "sock_impl_set_options", 00:23:11.027 "params": { 00:23:11.027 "impl_name": "ssl", 00:23:11.027 "recv_buf_size": 4096, 00:23:11.027 "send_buf_size": 4096, 00:23:11.027 "enable_recv_pipe": true, 00:23:11.027 "enable_quickack": false, 00:23:11.027 "enable_placement_id": 0, 00:23:11.027 "enable_zerocopy_send_server": true, 00:23:11.027 "enable_zerocopy_send_client": false, 00:23:11.027 "zerocopy_threshold": 0, 00:23:11.027 "tls_version": 0, 00:23:11.027 "enable_ktls": false 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "sock_impl_set_options", 00:23:11.027 "params": { 00:23:11.027 "impl_name": "posix", 00:23:11.027 "recv_buf_size": 2097152, 00:23:11.027 "send_buf_size": 2097152, 00:23:11.027 "enable_recv_pipe": true, 00:23:11.027 
"enable_quickack": false, 00:23:11.027 "enable_placement_id": 0, 00:23:11.027 "enable_zerocopy_send_server": true, 00:23:11.027 "enable_zerocopy_send_client": false, 00:23:11.027 "zerocopy_threshold": 0, 00:23:11.027 "tls_version": 0, 00:23:11.027 "enable_ktls": false 00:23:11.027 } 00:23:11.027 } 00:23:11.027 ] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "vmd", 00:23:11.027 "config": [] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "accel", 00:23:11.027 "config": [ 00:23:11.027 { 00:23:11.027 "method": "accel_set_options", 00:23:11.027 "params": { 00:23:11.027 "small_cache_size": 128, 00:23:11.027 "large_cache_size": 16, 00:23:11.027 "task_count": 2048, 00:23:11.027 "sequence_count": 2048, 00:23:11.027 "buf_count": 2048 00:23:11.027 } 00:23:11.027 } 00:23:11.027 ] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "bdev", 00:23:11.027 "config": [ 00:23:11.027 { 00:23:11.027 "method": "bdev_set_options", 00:23:11.027 "params": { 00:23:11.027 "bdev_io_pool_size": 65535, 00:23:11.027 "bdev_io_cache_size": 256, 00:23:11.027 "bdev_auto_examine": true, 00:23:11.027 "iobuf_small_cache_size": 128, 00:23:11.027 "iobuf_large_cache_size": 16 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "bdev_raid_set_options", 00:23:11.027 "params": { 00:23:11.027 "process_window_size_kb": 1024 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "bdev_iscsi_set_options", 00:23:11.027 "params": { 00:23:11.027 "timeout_sec": 30 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "bdev_nvme_set_options", 00:23:11.027 "params": { 00:23:11.027 "action_on_timeout": "none", 00:23:11.027 "timeout_us": 0, 00:23:11.027 "timeout_admin_us": 0, 00:23:11.027 "keep_alive_timeout_ms": 10000, 00:23:11.027 "arbitration_burst": 0, 00:23:11.027 "low_priority_weight": 0, 00:23:11.027 "medium_priority_weight": 0, 00:23:11.027 "high_priority_weight": 0, 00:23:11.027 "nvme_adminq_poll_period_us": 10000, 00:23:11.027 "nvme_ioq_poll_period_us": 0, 00:23:11.027 "io_queue_requests": 0, 00:23:11.027 "delay_cmd_submit": true, 00:23:11.027 "transport_retry_count": 4, 00:23:11.027 "bdev_retry_count": 3, 00:23:11.027 "transport_ack_timeout": 0, 00:23:11.027 "ctrlr_loss_timeout_sec": 0, 00:23:11.027 "reconnect_delay_sec": 0, 00:23:11.027 "fast_io_fail_timeout_sec": 0, 00:23:11.027 "disable_auto_failback": false, 00:23:11.027 "generate_uuids": false, 00:23:11.027 "transport_tos": 0, 00:23:11.027 "nvme_error_stat": false, 00:23:11.027 "rdma_srq_size": 0, 00:23:11.027 "io_path_stat": false, 00:23:11.027 "allow_accel_sequence": false, 00:23:11.027 "rdma_max_cq_size": 0, 00:23:11.027 "rdma_cm_event_timeout_ms": 0, 00:23:11.027 "dhchap_digests": [ 00:23:11.027 "sha256", 00:23:11.027 "sha384", 00:23:11.027 "sha512" 00:23:11.027 ], 00:23:11.027 "dhchap_dhgroups": [ 00:23:11.027 "null", 00:23:11.027 "ffdhe2048", 00:23:11.027 "ffdhe3072", 00:23:11.027 "ffdhe4096", 00:23:11.027 "ffdhe6144", 00:23:11.027 "ffdhe8192" 00:23:11.027 ] 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "bdev_nvme_set_hotplug", 00:23:11.027 "params": { 00:23:11.027 "period_us": 100000, 00:23:11.027 "enable": false 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "bdev_malloc_create", 00:23:11.027 "params": { 00:23:11.027 "name": "malloc0", 00:23:11.027 "num_blocks": 8192, 00:23:11.027 "block_size": 4096, 00:23:11.027 "physical_block_size": 4096, 00:23:11.027 "uuid": "ef1aa017-f125-49f8-8aa6-abacf38c1c0f", 00:23:11.027 "optimal_io_boundary": 0 00:23:11.027 } 
00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "bdev_wait_for_examine" 00:23:11.027 } 00:23:11.027 ] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "nbd", 00:23:11.027 "config": [] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "scheduler", 00:23:11.027 "config": [ 00:23:11.027 { 00:23:11.027 "method": "framework_set_scheduler", 00:23:11.027 "params": { 00:23:11.027 "name": "static" 00:23:11.027 } 00:23:11.027 } 00:23:11.027 ] 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "subsystem": "nvmf", 00:23:11.027 "config": [ 00:23:11.027 { 00:23:11.027 "method": "nvmf_set_config", 00:23:11.027 "params": { 00:23:11.027 "discovery_filter": "match_any", 00:23:11.027 "admin_cmd_passthru": { 00:23:11.027 "identify_ctrlr": false 00:23:11.027 } 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "nvmf_set_max_subsystems", 00:23:11.027 "params": { 00:23:11.027 "max_subsystems": 1024 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "nvmf_set_crdt", 00:23:11.027 "params": { 00:23:11.027 "crdt1": 0, 00:23:11.027 "crdt2": 0, 00:23:11.027 "crdt3": 0 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "nvmf_create_transport", 00:23:11.027 "params": { 00:23:11.027 "trtype": "TCP", 00:23:11.027 "max_queue_depth": 128, 00:23:11.027 "max_io_qpairs_per_ctrlr": 127, 00:23:11.027 "in_capsule_data_size": 4096, 00:23:11.027 "max_io_size": 131072, 00:23:11.027 "io_unit_size": 131072, 00:23:11.027 "max_aq_depth": 128, 00:23:11.027 "num_shared_buffers": 511, 00:23:11.027 "buf_cache_size": 4294967295, 00:23:11.027 "dif_insert_or_strip": false, 00:23:11.027 "zcopy": false, 00:23:11.027 "c2h_success": false, 00:23:11.027 "sock_priority": 0, 00:23:11.027 "abort_timeout_sec": 1, 00:23:11.027 "ack_timeout": 0, 00:23:11.027 "data_wr_pool_size": 0 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "nvmf_create_subsystem", 00:23:11.027 "params": { 00:23:11.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.027 "allow_any_host": false, 00:23:11.027 "serial_number": "SPDK00000000000001", 00:23:11.027 "model_number": "SPDK bdev Controller", 00:23:11.027 "max_namespaces": 10, 00:23:11.027 "min_cntlid": 1, 00:23:11.027 "max_cntlid": 65519, 00:23:11.027 "ana_reporting": false 00:23:11.027 } 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "method": "nvmf_subsystem_add_host", 00:23:11.027 "params": { 00:23:11.027 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.027 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.027 "psk": "/tmp/tmp.UTwCVTPOr7" 00:23:11.027 } 00:23:11.027 }, 00:23:11.028 { 00:23:11.028 "method": "nvmf_subsystem_add_ns", 00:23:11.028 "params": { 00:23:11.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.028 "namespace": { 00:23:11.028 "nsid": 1, 00:23:11.028 "bdev_name": "malloc0", 00:23:11.028 "nguid": "EF1AA017F12549F88AA6ABACF38C1C0F", 00:23:11.028 "uuid": "ef1aa017-f125-49f8-8aa6-abacf38c1c0f", 00:23:11.028 "no_auto_visible": false 00:23:11.028 } 00:23:11.028 } 00:23:11.028 }, 00:23:11.028 { 00:23:11.028 "method": "nvmf_subsystem_add_listener", 00:23:11.028 "params": { 00:23:11.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.028 "listen_address": { 00:23:11.028 "trtype": "TCP", 00:23:11.028 "adrfam": "IPv4", 00:23:11.028 "traddr": "10.0.0.2", 00:23:11.028 "trsvcid": "4420" 00:23:11.028 }, 00:23:11.028 "secure_channel": true 00:23:11.028 } 00:23:11.028 } 00:23:11.028 ] 00:23:11.028 } 00:23:11.028 ] 00:23:11.028 }' 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.028 16:21:53 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=362943 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 362943 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 362943 ']' 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.028 16:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.028 [2024-07-15 16:21:53.980925] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:11.028 [2024-07-15 16:21:53.981004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.287 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.287 [2024-07-15 16:21:54.049978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.287 [2024-07-15 16:21:54.140407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.287 [2024-07-15 16:21:54.140472] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.287 [2024-07-15 16:21:54.140489] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.287 [2024-07-15 16:21:54.140503] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.287 [2024-07-15 16:21:54.140516] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
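Worth noting how this second target instance got its configuration: instead of replaying RPCs one by one, tls.sh captured the first target's state with save_config and fed that JSON straight into the new nvmf_tgt through -c /dev/fd/62. The same effect by hand, sketched with a plain file in place of the file descriptor:

    # Persist a configured target, then restart from the snapshot.
    scripts/rpc.py save_config > /tmp/tgt.json    # includes subsystem, TLS listener and PSK host entry
    build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt.json    # fresh process comes up pre-configured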
00:23:11.287 [2024-07-15 16:21:54.140611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.547 [2024-07-15 16:21:54.379627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.547 [2024-07-15 16:21:54.395530] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.547 [2024-07-15 16:21:54.411588] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.547 [2024-07-15 16:21:54.421942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=363003 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 363003 /var/tmp/bdevperf.sock 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 363003 ']' 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
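The initiator side uses the same pattern, as the bdevperf invocation below shows: bdevperf starts idle (-z) on its own RPC socket, the bdev_nvme_attach_controller call carrying the PSK arrives embedded in the -c /dev/fd/63 JSON, and I/O only starts once bdevperf.py sends perform_tests. A rough sketch of that control flow, assuming the socket path used here:

    # Start bdevperf idle, then kick off the verify workload over its RPC socket.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # ... configuration (including the TLS controller attach) is loaded at startup via -c ...
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests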
00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.114 16:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:12.114 "subsystems": [ 00:23:12.114 { 00:23:12.114 "subsystem": "keyring", 00:23:12.114 "config": [] 00:23:12.114 }, 00:23:12.114 { 00:23:12.114 "subsystem": "iobuf", 00:23:12.114 "config": [ 00:23:12.114 { 00:23:12.114 "method": "iobuf_set_options", 00:23:12.114 "params": { 00:23:12.114 "small_pool_count": 8192, 00:23:12.114 "large_pool_count": 1024, 00:23:12.114 "small_bufsize": 8192, 00:23:12.114 "large_bufsize": 135168 00:23:12.114 } 00:23:12.114 } 00:23:12.114 ] 00:23:12.114 }, 00:23:12.114 { 00:23:12.114 "subsystem": "sock", 00:23:12.114 "config": [ 00:23:12.114 { 00:23:12.114 "method": "sock_set_default_impl", 00:23:12.114 "params": { 00:23:12.114 "impl_name": "posix" 00:23:12.114 } 00:23:12.114 }, 00:23:12.114 { 00:23:12.114 "method": "sock_impl_set_options", 00:23:12.114 "params": { 00:23:12.114 "impl_name": "ssl", 00:23:12.114 "recv_buf_size": 4096, 00:23:12.114 "send_buf_size": 4096, 00:23:12.115 "enable_recv_pipe": true, 00:23:12.115 "enable_quickack": false, 00:23:12.115 "enable_placement_id": 0, 00:23:12.115 "enable_zerocopy_send_server": true, 00:23:12.115 "enable_zerocopy_send_client": false, 00:23:12.115 "zerocopy_threshold": 0, 00:23:12.115 "tls_version": 0, 00:23:12.115 "enable_ktls": false 00:23:12.115 } 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "method": "sock_impl_set_options", 00:23:12.115 "params": { 00:23:12.115 "impl_name": "posix", 00:23:12.115 "recv_buf_size": 2097152, 00:23:12.115 "send_buf_size": 2097152, 00:23:12.115 "enable_recv_pipe": true, 00:23:12.115 "enable_quickack": false, 00:23:12.115 "enable_placement_id": 0, 00:23:12.115 "enable_zerocopy_send_server": true, 00:23:12.115 "enable_zerocopy_send_client": false, 00:23:12.115 "zerocopy_threshold": 0, 00:23:12.115 "tls_version": 0, 00:23:12.115 "enable_ktls": false 00:23:12.115 } 00:23:12.115 } 00:23:12.115 ] 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "subsystem": "vmd", 00:23:12.115 "config": [] 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "subsystem": "accel", 00:23:12.115 "config": [ 00:23:12.115 { 00:23:12.115 "method": "accel_set_options", 00:23:12.115 "params": { 00:23:12.115 "small_cache_size": 128, 00:23:12.115 "large_cache_size": 16, 00:23:12.115 "task_count": 2048, 00:23:12.115 "sequence_count": 2048, 00:23:12.115 "buf_count": 2048 00:23:12.115 } 00:23:12.115 } 00:23:12.115 ] 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "subsystem": "bdev", 00:23:12.115 "config": [ 00:23:12.115 { 00:23:12.115 "method": "bdev_set_options", 00:23:12.115 "params": { 00:23:12.115 "bdev_io_pool_size": 65535, 00:23:12.115 "bdev_io_cache_size": 256, 00:23:12.115 "bdev_auto_examine": true, 00:23:12.115 "iobuf_small_cache_size": 128, 00:23:12.115 "iobuf_large_cache_size": 16 00:23:12.115 } 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "method": "bdev_raid_set_options", 00:23:12.115 "params": { 00:23:12.115 "process_window_size_kb": 1024 00:23:12.115 } 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "method": "bdev_iscsi_set_options", 00:23:12.115 "params": { 00:23:12.115 "timeout_sec": 30 00:23:12.115 } 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "method": 
"bdev_nvme_set_options", 00:23:12.115 "params": { 00:23:12.115 "action_on_timeout": "none", 00:23:12.115 "timeout_us": 0, 00:23:12.115 "timeout_admin_us": 0, 00:23:12.115 "keep_alive_timeout_ms": 10000, 00:23:12.115 "arbitration_burst": 0, 00:23:12.115 "low_priority_weight": 0, 00:23:12.115 "medium_priority_weight": 0, 00:23:12.115 "high_priority_weight": 0, 00:23:12.115 "nvme_adminq_poll_period_us": 10000, 00:23:12.115 "nvme_ioq_poll_period_us": 0, 00:23:12.115 "io_queue_requests": 512, 00:23:12.115 "delay_cmd_submit": true, 00:23:12.115 "transport_retry_count": 4, 00:23:12.115 "bdev_retry_count": 3, 00:23:12.115 "transport_ack_timeout": 0, 00:23:12.115 "ctrlr_loss_timeout_sec": 0, 00:23:12.115 "reconnect_delay_sec": 0, 00:23:12.115 "fast_io_fail_timeout_sec": 0, 00:23:12.115 "disable_auto_failback": false, 00:23:12.115 "generate_uuids": false, 00:23:12.115 "transport_tos": 0, 00:23:12.115 "nvme_error_stat": false, 00:23:12.115 "rdma_srq_size": 0, 00:23:12.115 "io_path_stat": false, 00:23:12.115 "allow_accel_sequence": false, 00:23:12.115 "rdma_max_cq_size": 0, 00:23:12.115 "rdma_cm_event_timeout_ms": 0, 00:23:12.115 "dhchap_digests": [ 00:23:12.115 "sha256", 00:23:12.115 "sha384", 00:23:12.115 "sha512" 00:23:12.115 ], 00:23:12.115 "dhchap_dhgroups": [ 00:23:12.115 "null", 00:23:12.115 "ffdhe2048", 00:23:12.115 "ffdhe3072", 00:23:12.115 "ffdhe4096", 00:23:12.115 "ffdhe6144", 00:23:12.115 "ffdhe8192" 00:23:12.115 ] 00:23:12.115 } 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "method": "bdev_nvme_attach_controller", 00:23:12.115 "params": { 00:23:12.115 "name": "TLSTEST", 00:23:12.115 "trtype": "TCP", 00:23:12.115 "adrfam": "IPv4", 00:23:12.115 "traddr": "10.0.0.2", 00:23:12.115 "trsvcid": "4420", 00:23:12.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.115 "prchk_reftag": false, 00:23:12.115 "prchk_guard": false, 00:23:12.115 "ctrlr_loss_timeout_sec": 0, 00:23:12.115 "reconnect_delay_sec": 0, 00:23:12.115 "fast_io_fail_timeout_sec": 0, 00:23:12.115 "psk": "/tmp/tmp.UTwCVTPOr7", 00:23:12.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.115 "hdgst": false, 00:23:12.115 "ddgst": false 00:23:12.115 } 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "method": "bdev_nvme_set_hotplug", 00:23:12.115 "params": { 00:23:12.115 "period_us": 100000, 00:23:12.115 "enable": false 00:23:12.115 } 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "method": "bdev_wait_for_examine" 00:23:12.115 } 00:23:12.115 ] 00:23:12.115 }, 00:23:12.115 { 00:23:12.115 "subsystem": "nbd", 00:23:12.115 "config": [] 00:23:12.115 } 00:23:12.115 ] 00:23:12.115 }' 00:23:12.115 [2024-07-15 16:21:55.016786] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:12.115 [2024-07-15 16:21:55.016883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363003 ] 00:23:12.115 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.115 [2024-07-15 16:21:55.084216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.375 [2024-07-15 16:21:55.176725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.375 [2024-07-15 16:21:55.347929] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.375 [2024-07-15 16:21:55.348101] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:13.311 16:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.311 16:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:13.311 16:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:13.311 Running I/O for 10 seconds... 00:23:23.276 00:23:23.276 Latency(us) 00:23:23.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.276 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:23.276 Verification LBA range: start 0x0 length 0x2000 00:23:23.276 TLSTESTn1 : 10.02 3654.99 14.28 0.00 0.00 34956.01 9223.59 40389.59 00:23:23.276 =================================================================================================================== 00:23:23.276 Total : 3654.99 14.28 0.00 0.00 34956.01 9223.59 40389.59 00:23:23.276 0 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 363003 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 363003 ']' 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 363003 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 363003 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 363003' 00:23:23.276 killing process with pid 363003 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 363003 00:23:23.276 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.276 00:23:23.276 Latency(us) 00:23:23.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.276 =================================================================================================================== 00:23:23.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.276 [2024-07-15 16:22:06.223930] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:23:23.276 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 363003 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 362943 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 362943 ']' 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 362943 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 362943 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 362943' 00:23:23.535 killing process with pid 362943 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 362943 00:23:23.535 [2024-07-15 16:22:06.476206] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:23.535 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 362943 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=364426 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 364426 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 364426 ']' 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.793 16:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.052 [2024-07-15 16:22:06.776939] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:24.052 [2024-07-15 16:22:06.777027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.052 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.052 [2024-07-15 16:22:06.845762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.052 [2024-07-15 16:22:06.933565] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:24.052 [2024-07-15 16:22:06.933631] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.052 [2024-07-15 16:22:06.933645] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.052 [2024-07-15 16:22:06.933656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.052 [2024-07-15 16:22:06.933666] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.052 [2024-07-15 16:22:06.933693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.UTwCVTPOr7 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UTwCVTPOr7 00:23:24.310 16:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.567 [2024-07-15 16:22:07.344123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.567 16:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.825 16:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.083 [2024-07-15 16:22:07.845465] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.083 [2024-07-15 16:22:07.845692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.083 16:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.340 malloc0 00:23:25.340 16:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.599 16:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UTwCVTPOr7 00:23:25.859 [2024-07-15 16:22:08.582083] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=364591 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 364591 /var/tmp/bdevperf.sock 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 
-- # '[' -z 364591 ']' 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:25.859 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.859 [2024-07-15 16:22:08.642264] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:25.859 [2024-07-15 16:22:08.642351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364591 ] 00:23:25.859 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.859 [2024-07-15 16:22:08.710171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.859 [2024-07-15 16:22:08.803246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.117 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:26.117 16:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:26.117 16:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UTwCVTPOr7 00:23:26.373 16:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:26.631 [2024-07-15 16:22:09.413082] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.631 nvme0n1 00:23:26.631 16:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.631 Running I/O for 1 seconds... 
00:23:28.006 00:23:28.006 Latency(us) 00:23:28.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.006 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:28.006 Verification LBA range: start 0x0 length 0x2000 00:23:28.006 nvme0n1 : 1.04 3309.75 12.93 0.00 0.00 38035.79 9223.59 37088.52 00:23:28.006 =================================================================================================================== 00:23:28.006 Total : 3309.75 12.93 0.00 0.00 38035.79 9223.59 37088.52 00:23:28.006 0 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 364591 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 364591 ']' 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 364591 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 364591 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 364591' 00:23:28.006 killing process with pid 364591 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 364591 00:23:28.006 Received shutdown signal, test time was about 1.000000 seconds 00:23:28.006 00:23:28.006 Latency(us) 00:23:28.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.006 =================================================================================================================== 00:23:28.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 364591 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 364426 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 364426 ']' 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 364426 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 364426 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 364426' 00:23:28.006 killing process with pid 364426 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 364426 00:23:28.006 [2024-07-15 16:22:10.932894] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:28.006 16:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 364426 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.265 16:22:11 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=364986 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 364986 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 364986 ']' 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.265 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.265 [2024-07-15 16:22:11.237205] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:28.265 [2024-07-15 16:22:11.237295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.523 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.523 [2024-07-15 16:22:11.300750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.523 [2024-07-15 16:22:11.386453] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.523 [2024-07-15 16:22:11.386506] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.523 [2024-07-15 16:22:11.386536] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.523 [2024-07-15 16:22:11.386546] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.523 [2024-07-15 16:22:11.386556] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
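The deprecation warnings repeated throughout this run ("nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09", "deprecated feature spdk_nvme_ctrlr_opts.psk") flag exactly the switch exercised in the previous phase and again below: the PSK moves from a raw file path into the keyring. Side by side, with a placeholder key file:

    # Deprecated: hand the initiator a PSK file path directly (slated for removal in v24.09).
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/psk.key
    # Replacement: register the file as a named key, then attach by key name.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.key
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key0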
00:23:28.523 [2024-07-15 16:22:11.386582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.523 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:28.523 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:28.523 16:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.523 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.523 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.781 16:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.782 [2024-07-15 16:22:11.527788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.782 malloc0 00:23:28.782 [2024-07-15 16:22:11.559702] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.782 [2024-07-15 16:22:11.559968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=365017 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 365017 /var/tmp/bdevperf.sock 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 365017 ']' 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.782 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.782 [2024-07-15 16:22:11.630412] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
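bdevperf in this run is started with -z, which makes it come up idle and take its whole configuration over a private RPC socket (-r) instead of reading a config file; the I/O itself is then kicked off remotely, which is the bdevperf.py perform_tests call visible just below. A sketch of that flow with the same flags as the log (-m 2 pins the reactor to core 1, matching the "Reactor started on core 1" notice):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -q/-o/-w/-t define the queued verify workload: depth 128, 4k I/O, 1 second.
"$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
# ...create the bdev under test over /var/tmp/bdevperf.sock, then run the workload:
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests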
00:23:28.782 [2024-07-15 16:22:11.630496] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365017 ] 00:23:28.782 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.782 [2024-07-15 16:22:11.691007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.039 [2024-07-15 16:22:11.778104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.039 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.039 16:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:29.039 16:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UTwCVTPOr7 00:23:29.298 16:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:29.556 [2024-07-15 16:22:12.436923] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.556 nvme0n1 00:23:29.556 16:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.815 Running I/O for 1 seconds... 00:23:30.750 00:23:30.750 Latency(us) 00:23:30.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.750 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:30.750 Verification LBA range: start 0x0 length 0x2000 00:23:30.750 nvme0n1 : 1.03 3057.87 11.94 0.00 0.00 41352.77 9514.86 61749.48 00:23:30.750 =================================================================================================================== 00:23:30.750 Total : 3057.87 11.94 0.00 0.00 41352.77 9514.86 61749.48 00:23:30.750 0 00:23:30.750 16:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:30.750 16:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.750 16:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.007 16:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.007 16:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:31.007 "subsystems": [ 00:23:31.007 { 00:23:31.007 "subsystem": "keyring", 00:23:31.007 "config": [ 00:23:31.007 { 00:23:31.007 "method": "keyring_file_add_key", 00:23:31.007 "params": { 00:23:31.007 "name": "key0", 00:23:31.007 "path": "/tmp/tmp.UTwCVTPOr7" 00:23:31.007 } 00:23:31.007 } 00:23:31.007 ] 00:23:31.007 }, 00:23:31.007 { 00:23:31.007 "subsystem": "iobuf", 00:23:31.007 "config": [ 00:23:31.007 { 00:23:31.007 "method": "iobuf_set_options", 00:23:31.007 "params": { 00:23:31.007 "small_pool_count": 8192, 00:23:31.007 "large_pool_count": 1024, 00:23:31.007 "small_bufsize": 8192, 00:23:31.007 "large_bufsize": 135168 00:23:31.007 } 00:23:31.007 } 00:23:31.007 ] 00:23:31.007 }, 00:23:31.007 { 00:23:31.007 "subsystem": "sock", 00:23:31.007 "config": [ 00:23:31.007 { 00:23:31.007 "method": "sock_set_default_impl", 00:23:31.007 "params": { 00:23:31.007 "impl_name": "posix" 00:23:31.007 } 00:23:31.007 }, 00:23:31.007 
{ 00:23:31.007 "method": "sock_impl_set_options", 00:23:31.007 "params": { 00:23:31.007 "impl_name": "ssl", 00:23:31.007 "recv_buf_size": 4096, 00:23:31.007 "send_buf_size": 4096, 00:23:31.007 "enable_recv_pipe": true, 00:23:31.007 "enable_quickack": false, 00:23:31.007 "enable_placement_id": 0, 00:23:31.007 "enable_zerocopy_send_server": true, 00:23:31.007 "enable_zerocopy_send_client": false, 00:23:31.007 "zerocopy_threshold": 0, 00:23:31.007 "tls_version": 0, 00:23:31.007 "enable_ktls": false 00:23:31.007 } 00:23:31.007 }, 00:23:31.007 { 00:23:31.007 "method": "sock_impl_set_options", 00:23:31.007 "params": { 00:23:31.007 "impl_name": "posix", 00:23:31.008 "recv_buf_size": 2097152, 00:23:31.008 "send_buf_size": 2097152, 00:23:31.008 "enable_recv_pipe": true, 00:23:31.008 "enable_quickack": false, 00:23:31.008 "enable_placement_id": 0, 00:23:31.008 "enable_zerocopy_send_server": true, 00:23:31.008 "enable_zerocopy_send_client": false, 00:23:31.008 "zerocopy_threshold": 0, 00:23:31.008 "tls_version": 0, 00:23:31.008 "enable_ktls": false 00:23:31.008 } 00:23:31.008 } 00:23:31.008 ] 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "subsystem": "vmd", 00:23:31.008 "config": [] 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "subsystem": "accel", 00:23:31.008 "config": [ 00:23:31.008 { 00:23:31.008 "method": "accel_set_options", 00:23:31.008 "params": { 00:23:31.008 "small_cache_size": 128, 00:23:31.008 "large_cache_size": 16, 00:23:31.008 "task_count": 2048, 00:23:31.008 "sequence_count": 2048, 00:23:31.008 "buf_count": 2048 00:23:31.008 } 00:23:31.008 } 00:23:31.008 ] 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "subsystem": "bdev", 00:23:31.008 "config": [ 00:23:31.008 { 00:23:31.008 "method": "bdev_set_options", 00:23:31.008 "params": { 00:23:31.008 "bdev_io_pool_size": 65535, 00:23:31.008 "bdev_io_cache_size": 256, 00:23:31.008 "bdev_auto_examine": true, 00:23:31.008 "iobuf_small_cache_size": 128, 00:23:31.008 "iobuf_large_cache_size": 16 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "bdev_raid_set_options", 00:23:31.008 "params": { 00:23:31.008 "process_window_size_kb": 1024 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "bdev_iscsi_set_options", 00:23:31.008 "params": { 00:23:31.008 "timeout_sec": 30 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "bdev_nvme_set_options", 00:23:31.008 "params": { 00:23:31.008 "action_on_timeout": "none", 00:23:31.008 "timeout_us": 0, 00:23:31.008 "timeout_admin_us": 0, 00:23:31.008 "keep_alive_timeout_ms": 10000, 00:23:31.008 "arbitration_burst": 0, 00:23:31.008 "low_priority_weight": 0, 00:23:31.008 "medium_priority_weight": 0, 00:23:31.008 "high_priority_weight": 0, 00:23:31.008 "nvme_adminq_poll_period_us": 10000, 00:23:31.008 "nvme_ioq_poll_period_us": 0, 00:23:31.008 "io_queue_requests": 0, 00:23:31.008 "delay_cmd_submit": true, 00:23:31.008 "transport_retry_count": 4, 00:23:31.008 "bdev_retry_count": 3, 00:23:31.008 "transport_ack_timeout": 0, 00:23:31.008 "ctrlr_loss_timeout_sec": 0, 00:23:31.008 "reconnect_delay_sec": 0, 00:23:31.008 "fast_io_fail_timeout_sec": 0, 00:23:31.008 "disable_auto_failback": false, 00:23:31.008 "generate_uuids": false, 00:23:31.008 "transport_tos": 0, 00:23:31.008 "nvme_error_stat": false, 00:23:31.008 "rdma_srq_size": 0, 00:23:31.008 "io_path_stat": false, 00:23:31.008 "allow_accel_sequence": false, 00:23:31.008 "rdma_max_cq_size": 0, 00:23:31.008 "rdma_cm_event_timeout_ms": 0, 00:23:31.008 "dhchap_digests": [ 00:23:31.008 "sha256", 00:23:31.008 "sha384", 
00:23:31.008 "sha512" 00:23:31.008 ], 00:23:31.008 "dhchap_dhgroups": [ 00:23:31.008 "null", 00:23:31.008 "ffdhe2048", 00:23:31.008 "ffdhe3072", 00:23:31.008 "ffdhe4096", 00:23:31.008 "ffdhe6144", 00:23:31.008 "ffdhe8192" 00:23:31.008 ] 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "bdev_nvme_set_hotplug", 00:23:31.008 "params": { 00:23:31.008 "period_us": 100000, 00:23:31.008 "enable": false 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "bdev_malloc_create", 00:23:31.008 "params": { 00:23:31.008 "name": "malloc0", 00:23:31.008 "num_blocks": 8192, 00:23:31.008 "block_size": 4096, 00:23:31.008 "physical_block_size": 4096, 00:23:31.008 "uuid": "9d6b7188-0694-49d4-a1a1-af66c5f0d2ed", 00:23:31.008 "optimal_io_boundary": 0 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "bdev_wait_for_examine" 00:23:31.008 } 00:23:31.008 ] 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "subsystem": "nbd", 00:23:31.008 "config": [] 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "subsystem": "scheduler", 00:23:31.008 "config": [ 00:23:31.008 { 00:23:31.008 "method": "framework_set_scheduler", 00:23:31.008 "params": { 00:23:31.008 "name": "static" 00:23:31.008 } 00:23:31.008 } 00:23:31.008 ] 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "subsystem": "nvmf", 00:23:31.008 "config": [ 00:23:31.008 { 00:23:31.008 "method": "nvmf_set_config", 00:23:31.008 "params": { 00:23:31.008 "discovery_filter": "match_any", 00:23:31.008 "admin_cmd_passthru": { 00:23:31.008 "identify_ctrlr": false 00:23:31.008 } 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "nvmf_set_max_subsystems", 00:23:31.008 "params": { 00:23:31.008 "max_subsystems": 1024 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "nvmf_set_crdt", 00:23:31.008 "params": { 00:23:31.008 "crdt1": 0, 00:23:31.008 "crdt2": 0, 00:23:31.008 "crdt3": 0 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "nvmf_create_transport", 00:23:31.008 "params": { 00:23:31.008 "trtype": "TCP", 00:23:31.008 "max_queue_depth": 128, 00:23:31.008 "max_io_qpairs_per_ctrlr": 127, 00:23:31.008 "in_capsule_data_size": 4096, 00:23:31.008 "max_io_size": 131072, 00:23:31.008 "io_unit_size": 131072, 00:23:31.008 "max_aq_depth": 128, 00:23:31.008 "num_shared_buffers": 511, 00:23:31.008 "buf_cache_size": 4294967295, 00:23:31.008 "dif_insert_or_strip": false, 00:23:31.008 "zcopy": false, 00:23:31.008 "c2h_success": false, 00:23:31.008 "sock_priority": 0, 00:23:31.008 "abort_timeout_sec": 1, 00:23:31.008 "ack_timeout": 0, 00:23:31.008 "data_wr_pool_size": 0 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "nvmf_create_subsystem", 00:23:31.008 "params": { 00:23:31.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.008 "allow_any_host": false, 00:23:31.008 "serial_number": "00000000000000000000", 00:23:31.008 "model_number": "SPDK bdev Controller", 00:23:31.008 "max_namespaces": 32, 00:23:31.008 "min_cntlid": 1, 00:23:31.008 "max_cntlid": 65519, 00:23:31.008 "ana_reporting": false 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "nvmf_subsystem_add_host", 00:23:31.008 "params": { 00:23:31.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.008 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.008 "psk": "key0" 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "nvmf_subsystem_add_ns", 00:23:31.008 "params": { 00:23:31.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.008 "namespace": { 00:23:31.008 "nsid": 1, 00:23:31.008 "bdev_name": 
"malloc0", 00:23:31.008 "nguid": "9D6B7188069449D4A1A1AF66C5F0D2ED", 00:23:31.008 "uuid": "9d6b7188-0694-49d4-a1a1-af66c5f0d2ed", 00:23:31.008 "no_auto_visible": false 00:23:31.008 } 00:23:31.008 } 00:23:31.008 }, 00:23:31.008 { 00:23:31.008 "method": "nvmf_subsystem_add_listener", 00:23:31.008 "params": { 00:23:31.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.008 "listen_address": { 00:23:31.008 "trtype": "TCP", 00:23:31.008 "adrfam": "IPv4", 00:23:31.008 "traddr": "10.0.0.2", 00:23:31.008 "trsvcid": "4420" 00:23:31.008 }, 00:23:31.008 "secure_channel": true 00:23:31.008 } 00:23:31.008 } 00:23:31.008 ] 00:23:31.008 } 00:23:31.008 ] 00:23:31.008 }' 00:23:31.008 16:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:31.268 16:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:31.268 "subsystems": [ 00:23:31.268 { 00:23:31.268 "subsystem": "keyring", 00:23:31.268 "config": [ 00:23:31.268 { 00:23:31.268 "method": "keyring_file_add_key", 00:23:31.268 "params": { 00:23:31.268 "name": "key0", 00:23:31.268 "path": "/tmp/tmp.UTwCVTPOr7" 00:23:31.268 } 00:23:31.268 } 00:23:31.268 ] 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "subsystem": "iobuf", 00:23:31.268 "config": [ 00:23:31.268 { 00:23:31.268 "method": "iobuf_set_options", 00:23:31.268 "params": { 00:23:31.268 "small_pool_count": 8192, 00:23:31.268 "large_pool_count": 1024, 00:23:31.268 "small_bufsize": 8192, 00:23:31.268 "large_bufsize": 135168 00:23:31.268 } 00:23:31.268 } 00:23:31.268 ] 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "subsystem": "sock", 00:23:31.268 "config": [ 00:23:31.268 { 00:23:31.268 "method": "sock_set_default_impl", 00:23:31.268 "params": { 00:23:31.268 "impl_name": "posix" 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "sock_impl_set_options", 00:23:31.268 "params": { 00:23:31.268 "impl_name": "ssl", 00:23:31.268 "recv_buf_size": 4096, 00:23:31.268 "send_buf_size": 4096, 00:23:31.268 "enable_recv_pipe": true, 00:23:31.268 "enable_quickack": false, 00:23:31.268 "enable_placement_id": 0, 00:23:31.268 "enable_zerocopy_send_server": true, 00:23:31.268 "enable_zerocopy_send_client": false, 00:23:31.268 "zerocopy_threshold": 0, 00:23:31.268 "tls_version": 0, 00:23:31.268 "enable_ktls": false 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "sock_impl_set_options", 00:23:31.268 "params": { 00:23:31.268 "impl_name": "posix", 00:23:31.268 "recv_buf_size": 2097152, 00:23:31.268 "send_buf_size": 2097152, 00:23:31.268 "enable_recv_pipe": true, 00:23:31.268 "enable_quickack": false, 00:23:31.268 "enable_placement_id": 0, 00:23:31.268 "enable_zerocopy_send_server": true, 00:23:31.268 "enable_zerocopy_send_client": false, 00:23:31.268 "zerocopy_threshold": 0, 00:23:31.268 "tls_version": 0, 00:23:31.268 "enable_ktls": false 00:23:31.268 } 00:23:31.268 } 00:23:31.268 ] 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "subsystem": "vmd", 00:23:31.268 "config": [] 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "subsystem": "accel", 00:23:31.268 "config": [ 00:23:31.268 { 00:23:31.268 "method": "accel_set_options", 00:23:31.268 "params": { 00:23:31.268 "small_cache_size": 128, 00:23:31.268 "large_cache_size": 16, 00:23:31.268 "task_count": 2048, 00:23:31.268 "sequence_count": 2048, 00:23:31.268 "buf_count": 2048 00:23:31.268 } 00:23:31.268 } 00:23:31.268 ] 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "subsystem": "bdev", 00:23:31.268 "config": [ 00:23:31.268 { 00:23:31.268 
"method": "bdev_set_options", 00:23:31.268 "params": { 00:23:31.268 "bdev_io_pool_size": 65535, 00:23:31.268 "bdev_io_cache_size": 256, 00:23:31.268 "bdev_auto_examine": true, 00:23:31.268 "iobuf_small_cache_size": 128, 00:23:31.268 "iobuf_large_cache_size": 16 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "bdev_raid_set_options", 00:23:31.268 "params": { 00:23:31.268 "process_window_size_kb": 1024 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "bdev_iscsi_set_options", 00:23:31.268 "params": { 00:23:31.268 "timeout_sec": 30 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "bdev_nvme_set_options", 00:23:31.268 "params": { 00:23:31.268 "action_on_timeout": "none", 00:23:31.268 "timeout_us": 0, 00:23:31.268 "timeout_admin_us": 0, 00:23:31.268 "keep_alive_timeout_ms": 10000, 00:23:31.268 "arbitration_burst": 0, 00:23:31.268 "low_priority_weight": 0, 00:23:31.268 "medium_priority_weight": 0, 00:23:31.268 "high_priority_weight": 0, 00:23:31.268 "nvme_adminq_poll_period_us": 10000, 00:23:31.268 "nvme_ioq_poll_period_us": 0, 00:23:31.268 "io_queue_requests": 512, 00:23:31.268 "delay_cmd_submit": true, 00:23:31.268 "transport_retry_count": 4, 00:23:31.268 "bdev_retry_count": 3, 00:23:31.268 "transport_ack_timeout": 0, 00:23:31.268 "ctrlr_loss_timeout_sec": 0, 00:23:31.268 "reconnect_delay_sec": 0, 00:23:31.268 "fast_io_fail_timeout_sec": 0, 00:23:31.268 "disable_auto_failback": false, 00:23:31.268 "generate_uuids": false, 00:23:31.268 "transport_tos": 0, 00:23:31.268 "nvme_error_stat": false, 00:23:31.268 "rdma_srq_size": 0, 00:23:31.268 "io_path_stat": false, 00:23:31.268 "allow_accel_sequence": false, 00:23:31.268 "rdma_max_cq_size": 0, 00:23:31.268 "rdma_cm_event_timeout_ms": 0, 00:23:31.268 "dhchap_digests": [ 00:23:31.268 "sha256", 00:23:31.268 "sha384", 00:23:31.268 "sha512" 00:23:31.268 ], 00:23:31.268 "dhchap_dhgroups": [ 00:23:31.268 "null", 00:23:31.268 "ffdhe2048", 00:23:31.268 "ffdhe3072", 00:23:31.268 "ffdhe4096", 00:23:31.268 "ffdhe6144", 00:23:31.268 "ffdhe8192" 00:23:31.268 ] 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "bdev_nvme_attach_controller", 00:23:31.268 "params": { 00:23:31.268 "name": "nvme0", 00:23:31.268 "trtype": "TCP", 00:23:31.268 "adrfam": "IPv4", 00:23:31.268 "traddr": "10.0.0.2", 00:23:31.268 "trsvcid": "4420", 00:23:31.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.268 "prchk_reftag": false, 00:23:31.268 "prchk_guard": false, 00:23:31.268 "ctrlr_loss_timeout_sec": 0, 00:23:31.268 "reconnect_delay_sec": 0, 00:23:31.268 "fast_io_fail_timeout_sec": 0, 00:23:31.268 "psk": "key0", 00:23:31.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.268 "hdgst": false, 00:23:31.268 "ddgst": false 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "bdev_nvme_set_hotplug", 00:23:31.268 "params": { 00:23:31.268 "period_us": 100000, 00:23:31.268 "enable": false 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.268 "method": "bdev_enable_histogram", 00:23:31.268 "params": { 00:23:31.268 "name": "nvme0n1", 00:23:31.268 "enable": true 00:23:31.268 } 00:23:31.268 }, 00:23:31.268 { 00:23:31.269 "method": "bdev_wait_for_examine" 00:23:31.269 } 00:23:31.269 ] 00:23:31.269 }, 00:23:31.269 { 00:23:31.269 "subsystem": "nbd", 00:23:31.269 "config": [] 00:23:31.269 } 00:23:31.269 ] 00:23:31.269 }' 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 365017 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 365017 ']' 
00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 365017 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 365017 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 365017' 00:23:31.269 killing process with pid 365017 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 365017 00:23:31.269 Received shutdown signal, test time was about 1.000000 seconds 00:23:31.269 00:23:31.269 Latency(us) 00:23:31.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.269 =================================================================================================================== 00:23:31.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.269 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 365017 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 364986 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 364986 ']' 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 364986 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 364986 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 364986' 00:23:31.529 killing process with pid 364986 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 364986 00:23:31.529 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 364986 00:23:31.788 16:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:31.788 16:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.788 16:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:31.788 "subsystems": [ 00:23:31.788 { 00:23:31.788 "subsystem": "keyring", 00:23:31.788 "config": [ 00:23:31.788 { 00:23:31.788 "method": "keyring_file_add_key", 00:23:31.788 "params": { 00:23:31.788 "name": "key0", 00:23:31.788 "path": "/tmp/tmp.UTwCVTPOr7" 00:23:31.788 } 00:23:31.788 } 00:23:31.788 ] 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "subsystem": "iobuf", 00:23:31.788 "config": [ 00:23:31.788 { 00:23:31.788 "method": "iobuf_set_options", 00:23:31.788 "params": { 00:23:31.788 "small_pool_count": 8192, 00:23:31.788 "large_pool_count": 1024, 00:23:31.788 "small_bufsize": 8192, 00:23:31.788 "large_bufsize": 135168 00:23:31.788 } 00:23:31.788 } 00:23:31.788 ] 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "subsystem": "sock", 00:23:31.788 "config": [ 00:23:31.788 { 00:23:31.788 "method": "sock_set_default_impl", 00:23:31.788 "params": { 
00:23:31.788 "impl_name": "posix" 00:23:31.788 } 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "method": "sock_impl_set_options", 00:23:31.788 "params": { 00:23:31.788 "impl_name": "ssl", 00:23:31.788 "recv_buf_size": 4096, 00:23:31.788 "send_buf_size": 4096, 00:23:31.788 "enable_recv_pipe": true, 00:23:31.788 "enable_quickack": false, 00:23:31.788 "enable_placement_id": 0, 00:23:31.788 "enable_zerocopy_send_server": true, 00:23:31.788 "enable_zerocopy_send_client": false, 00:23:31.788 "zerocopy_threshold": 0, 00:23:31.788 "tls_version": 0, 00:23:31.788 "enable_ktls": false 00:23:31.788 } 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "method": "sock_impl_set_options", 00:23:31.788 "params": { 00:23:31.788 "impl_name": "posix", 00:23:31.788 "recv_buf_size": 2097152, 00:23:31.788 "send_buf_size": 2097152, 00:23:31.788 "enable_recv_pipe": true, 00:23:31.788 "enable_quickack": false, 00:23:31.788 "enable_placement_id": 0, 00:23:31.788 "enable_zerocopy_send_server": true, 00:23:31.788 "enable_zerocopy_send_client": false, 00:23:31.788 "zerocopy_threshold": 0, 00:23:31.788 "tls_version": 0, 00:23:31.788 "enable_ktls": false 00:23:31.788 } 00:23:31.788 } 00:23:31.788 ] 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "subsystem": "vmd", 00:23:31.788 "config": [] 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "subsystem": "accel", 00:23:31.788 "config": [ 00:23:31.788 { 00:23:31.788 "method": "accel_set_options", 00:23:31.788 "params": { 00:23:31.788 "small_cache_size": 128, 00:23:31.788 "large_cache_size": 16, 00:23:31.788 "task_count": 2048, 00:23:31.788 "sequence_count": 2048, 00:23:31.788 "buf_count": 2048 00:23:31.788 } 00:23:31.788 } 00:23:31.788 ] 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "subsystem": "bdev", 00:23:31.788 "config": [ 00:23:31.788 { 00:23:31.788 "method": "bdev_set_options", 00:23:31.788 "params": { 00:23:31.788 "bdev_io_pool_size": 65535, 00:23:31.788 "bdev_io_cache_size": 256, 00:23:31.788 "bdev_auto_examine": true, 00:23:31.788 "iobuf_small_cache_size": 128, 00:23:31.788 "iobuf_large_cache_size": 16 00:23:31.788 } 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "method": "bdev_raid_set_options", 00:23:31.788 "params": { 00:23:31.788 "process_window_size_kb": 1024 00:23:31.788 } 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "method": "bdev_iscsi_set_options", 00:23:31.788 "params": { 00:23:31.788 "timeout_sec": 30 00:23:31.788 } 00:23:31.788 }, 00:23:31.788 { 00:23:31.788 "method": "bdev_nvme_set_options", 00:23:31.788 "params": { 00:23:31.788 "action_on_timeout": "none", 00:23:31.788 "timeout_us": 0, 00:23:31.788 "timeout_admin_us": 0, 00:23:31.788 "keep_alive_timeout_ms": 10000, 00:23:31.788 "arbitration_burst": 0, 00:23:31.788 "low_priority_weight": 0, 00:23:31.788 "medium_priority_weight": 0, 00:23:31.788 "high_priority_weight": 0, 00:23:31.788 "nvme_adminq_poll_period_us": 10000, 00:23:31.788 "nvme_ioq_poll_period_us": 0, 00:23:31.789 "io_queue_requests": 0, 00:23:31.789 "delay_cmd_submit": true, 00:23:31.789 "transport_retry_count": 4, 00:23:31.789 "bdev_retry_count": 3, 00:23:31.789 "transport_ack_timeout": 0, 00:23:31.789 "ctrlr_loss_timeout_sec": 0, 00:23:31.789 "reconnect_delay_sec": 0, 00:23:31.789 "fast_io_fail_timeout_sec": 0, 00:23:31.789 "disable_auto_failback": false, 00:23:31.789 "generate_uuids": false, 00:23:31.789 "transport_tos": 0, 00:23:31.789 "nvme_error_stat": false, 00:23:31.789 "rdma_srq_size": 0, 00:23:31.789 "io_path_stat": false, 00:23:31.789 "allow_accel_sequence": false, 00:23:31.789 "rdma_max_cq_size": 0, 00:23:31.789 "rdma_cm_event_timeout_ms": 0, 
00:23:31.789 "dhchap_digests": [ 00:23:31.789 "sha256", 00:23:31.789 "sha384", 00:23:31.789 "sha512" 00:23:31.789 ], 00:23:31.789 "dhchap_dhgroups": [ 00:23:31.789 "null", 00:23:31.789 "ffdhe2048", 00:23:31.789 "ffdhe3072", 00:23:31.789 "ffdhe4096", 00:23:31.789 "ffdhe6144", 00:23:31.789 "ffdhe8192" 00:23:31.789 ] 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "bdev_nvme_set_hotplug", 00:23:31.789 "params": { 00:23:31.789 "period_us": 100000, 00:23:31.789 "enable": false 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "bdev_malloc_create", 00:23:31.789 "params": { 00:23:31.789 "name": "malloc0", 00:23:31.789 "num_blocks": 8192, 00:23:31.789 "block_size": 4096, 00:23:31.789 "physical_block_size": 4096, 00:23:31.789 "uuid": "9d6b7188-0694-49d4-a1a1-af66c5f0d2ed", 00:23:31.789 "optimal_io_boundary": 0 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "bdev_wait_for_examine" 00:23:31.789 } 00:23:31.789 ] 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "subsystem": "nbd", 00:23:31.789 "config": [] 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "subsystem": "scheduler", 00:23:31.789 "config": [ 00:23:31.789 { 00:23:31.789 "method": "framework_set_scheduler", 00:23:31.789 "params": { 00:23:31.789 "name": "static" 00:23:31.789 } 00:23:31.789 } 00:23:31.789 ] 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "subsystem": "nvmf", 00:23:31.789 "config": [ 00:23:31.789 { 00:23:31.789 "method": "nvmf_set_config", 00:23:31.789 "params": { 00:23:31.789 "discovery_filter": "match_any", 00:23:31.789 "admin_cmd_passthru": { 00:23:31.789 "identify_ctrlr": false 00:23:31.789 } 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "nvmf_set_max_subsystems", 00:23:31.789 "params": { 00:23:31.789 "max_subsystems": 1024 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "nvmf_set_crdt", 00:23:31.789 "params": { 00:23:31.789 "crdt1": 0, 00:23:31.789 "crdt2": 0, 00:23:31.789 "crdt3": 0 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "nvmf_create_transport", 00:23:31.789 "params": { 00:23:31.789 "trtype": "TCP", 00:23:31.789 "max_queue_depth": 128, 00:23:31.789 "max_io_qpairs_per_ctrlr": 127, 00:23:31.789 "in_capsule_data_size": 4096, 00:23:31.789 "max_io_size": 131072, 00:23:31.789 "io_unit_size": 131072, 00:23:31.789 "max_aq_depth": 128, 00:23:31.789 "num_shared_buffers": 511, 00:23:31.789 "buf_cache_size": 4294967295, 00:23:31.789 "dif_insert_or_strip": false, 00:23:31.789 "zcopy": false, 00:23:31.789 "c2h_success": false, 00:23:31.789 "sock_priority": 0, 00:23:31.789 "abort_timeout_sec": 1, 00:23:31.789 "ack_timeout": 0, 00:23:31.789 "data_wr_pool_size": 0 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "nvmf_create_subsystem", 00:23:31.789 "params": { 00:23:31.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.789 "allow_any_host": false, 00:23:31.789 "serial_number": "00000000000000000000", 00:23:31.789 "model_number": "SPDK bdev Controller", 00:23:31.789 "max_namespaces": 32, 00:23:31.789 "min_cntlid": 1, 00:23:31.789 "max_cntlid": 65519, 00:23:31.789 "ana_reporting": false 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "nvmf_subsystem_add_host", 00:23:31.789 "params": { 00:23:31.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.789 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.789 "psk": "key0" 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "nvmf_subsystem_add_ns", 00:23:31.789 "params": { 00:23:31.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:23:31.789 "namespace": { 00:23:31.789 "nsid": 1, 00:23:31.789 "bdev_name": "malloc0", 00:23:31.789 "nguid": "9D6B7188069449D4A1A1AF66C5F0D2ED", 00:23:31.789 "uuid": "9d6b7188-0694-49d4-a1a1-af66c5f0d2ed", 00:23:31.789 "no_auto_visible": false 00:23:31.789 } 00:23:31.789 } 00:23:31.789 }, 00:23:31.789 { 00:23:31.789 "method": "nvmf_subsystem_add_listener", 00:23:31.789 "params": { 00:23:31.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.789 "listen_address": { 00:23:31.789 "trtype": "TCP", 00:23:31.789 "adrfam": "IPv4", 00:23:31.789 "traddr": "10.0.0.2", 00:23:31.789 "trsvcid": "4420" 00:23:31.789 }, 00:23:31.789 "secure_channel": true 00:23:31.789 } 00:23:31.789 } 00:23:31.789 ] 00:23:31.789 } 00:23:31.789 ] 00:23:31.789 }' 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=365418 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 365418 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 365418 ']' 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:31.789 16:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.789 [2024-07-15 16:22:14.672353] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:31.789 [2024-07-15 16:22:14.672448] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.789 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.789 [2024-07-15 16:22:14.741351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.049 [2024-07-15 16:22:14.832751] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.049 [2024-07-15 16:22:14.832821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.050 [2024-07-15 16:22:14.832844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.050 [2024-07-15 16:22:14.832856] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.050 [2024-07-15 16:22:14.832865] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
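The -c /dev/fd/62 in the restart above is bash process substitution at work: the JSON captured earlier with save_config is written into a file descriptor and handed to nvmf_tgt as if it were a config file, so the new target comes up reproducing the old one's state. The round-trip in isolation, with illustrative variable names:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Snapshot the running target's configuration as JSON...
tgtcfg=$("$SPDK/scripts/rpc.py" save_config)
# ...and boot a fresh target from it; <(...) expands to a /dev/fd/NN path.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &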
00:23:32.050 [2024-07-15 16:22:14.832938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.310 [2024-07-15 16:22:15.073714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.310 [2024-07-15 16:22:15.105750] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.310 [2024-07-15 16:22:15.115887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=365572 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 365572 /var/tmp/bdevperf.sock 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 365572 ']' 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
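The bdevperf configuration echoed next wires TLS in on the initiator side: a keyring entry named key0 pointing at the PSK interchange file, then a bdev_nvme_attach_controller that references it. The same two steps done as direct RPC calls, with the names, paths, and NQNs copied from the trace:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
# Register the PSK under the name the attach call will look up.
$rpc keyring_file_add_key key0 /tmp/tmp.UTwCVTPOr7
# Attach to the TLS-enabled listener; --psk selects the keyring entry.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1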
00:23:32.877 16:22:15 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:32.877 "subsystems": [ 00:23:32.877 { 00:23:32.877 "subsystem": "keyring", 00:23:32.877 "config": [ 00:23:32.877 { 00:23:32.877 "method": "keyring_file_add_key", 00:23:32.877 "params": { 00:23:32.877 "name": "key0", 00:23:32.877 "path": "/tmp/tmp.UTwCVTPOr7" 00:23:32.877 } 00:23:32.877 } 00:23:32.877 ] 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "subsystem": "iobuf", 00:23:32.877 "config": [ 00:23:32.877 { 00:23:32.877 "method": "iobuf_set_options", 00:23:32.877 "params": { 00:23:32.877 "small_pool_count": 8192, 00:23:32.877 "large_pool_count": 1024, 00:23:32.877 "small_bufsize": 8192, 00:23:32.877 "large_bufsize": 135168 00:23:32.877 } 00:23:32.877 } 00:23:32.877 ] 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "subsystem": "sock", 00:23:32.877 "config": [ 00:23:32.877 { 00:23:32.877 "method": "sock_set_default_impl", 00:23:32.877 "params": { 00:23:32.877 "impl_name": "posix" 00:23:32.877 } 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "method": "sock_impl_set_options", 00:23:32.877 "params": { 00:23:32.877 "impl_name": "ssl", 00:23:32.877 "recv_buf_size": 4096, 00:23:32.877 "send_buf_size": 4096, 00:23:32.877 "enable_recv_pipe": true, 00:23:32.877 "enable_quickack": false, 00:23:32.877 "enable_placement_id": 0, 00:23:32.877 "enable_zerocopy_send_server": true, 00:23:32.877 "enable_zerocopy_send_client": false, 00:23:32.877 "zerocopy_threshold": 0, 00:23:32.877 "tls_version": 0, 00:23:32.877 "enable_ktls": false 00:23:32.877 } 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "method": "sock_impl_set_options", 00:23:32.877 "params": { 00:23:32.877 "impl_name": "posix", 00:23:32.877 "recv_buf_size": 2097152, 00:23:32.877 "send_buf_size": 2097152, 00:23:32.877 "enable_recv_pipe": true, 00:23:32.877 "enable_quickack": false, 00:23:32.877 "enable_placement_id": 0, 00:23:32.877 "enable_zerocopy_send_server": true, 00:23:32.877 "enable_zerocopy_send_client": false, 00:23:32.877 "zerocopy_threshold": 0, 00:23:32.877 "tls_version": 0, 00:23:32.877 "enable_ktls": false 00:23:32.877 } 00:23:32.877 } 00:23:32.877 ] 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "subsystem": "vmd", 00:23:32.877 "config": [] 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "subsystem": "accel", 00:23:32.877 "config": [ 00:23:32.877 { 00:23:32.877 "method": "accel_set_options", 00:23:32.877 "params": { 00:23:32.877 "small_cache_size": 128, 00:23:32.877 "large_cache_size": 16, 00:23:32.877 "task_count": 2048, 00:23:32.877 "sequence_count": 2048, 00:23:32.877 "buf_count": 2048 00:23:32.877 } 00:23:32.877 } 00:23:32.877 ] 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "subsystem": "bdev", 00:23:32.877 "config": [ 00:23:32.877 { 00:23:32.877 "method": "bdev_set_options", 00:23:32.877 "params": { 00:23:32.877 "bdev_io_pool_size": 65535, 00:23:32.877 "bdev_io_cache_size": 256, 00:23:32.877 "bdev_auto_examine": true, 00:23:32.877 "iobuf_small_cache_size": 128, 00:23:32.877 "iobuf_large_cache_size": 16 00:23:32.877 } 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "method": "bdev_raid_set_options", 00:23:32.877 "params": { 00:23:32.877 "process_window_size_kb": 1024 00:23:32.877 } 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "method": "bdev_iscsi_set_options", 00:23:32.877 "params": { 00:23:32.877 "timeout_sec": 30 00:23:32.877 } 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "method": "bdev_nvme_set_options", 00:23:32.877 "params": { 00:23:32.877 "action_on_timeout": "none", 00:23:32.877 "timeout_us": 0, 00:23:32.877 "timeout_admin_us": 0, 00:23:32.877 "keep_alive_timeout_ms": 
10000, 00:23:32.877 "arbitration_burst": 0, 00:23:32.877 "low_priority_weight": 0, 00:23:32.877 "medium_priority_weight": 0, 00:23:32.877 "high_priority_weight": 0, 00:23:32.877 "nvme_adminq_poll_period_us": 10000, 00:23:32.877 "nvme_ioq_poll_period_us": 0, 00:23:32.877 "io_queue_requests": 512, 00:23:32.877 "delay_cmd_submit": true, 00:23:32.877 "transport_retry_count": 4, 00:23:32.877 "bdev_retry_count": 3, 00:23:32.877 "transport_ack_timeout": 0, 00:23:32.877 "ctrlr_loss_timeout_sec": 0, 00:23:32.877 "reconnect_delay_sec": 0, 00:23:32.877 "fast_io_fail_timeout_sec": 0, 00:23:32.877 "disable_auto_failback": false, 00:23:32.877 "generate_uuids": false, 00:23:32.877 "transport_tos": 0, 00:23:32.877 "nvme_error_stat": false, 00:23:32.877 "rdma_srq_size": 0, 00:23:32.877 "io_path_stat": false, 00:23:32.877 "allow_accel_sequence": false, 00:23:32.877 "rdma_max_cq_size": 0, 00:23:32.877 "rdma_cm_event_timeout_ms": 0, 00:23:32.877 "dhchap_digests": [ 00:23:32.877 "sha256", 00:23:32.877 "sha384", 00:23:32.877 "sha512" 00:23:32.877 ], 00:23:32.877 "dhchap_dhgroups": [ 00:23:32.877 "null", 00:23:32.877 "ffdhe2048", 00:23:32.877 "ffdhe3072", 00:23:32.877 "ffdhe4096", 00:23:32.877 "ffdhe6144", 00:23:32.877 "ffdhe8192" 00:23:32.877 ] 00:23:32.877 } 00:23:32.877 }, 00:23:32.877 { 00:23:32.877 "method": "bdev_nvme_attach_controller", 00:23:32.877 "params": { 00:23:32.877 "name": "nvme0", 00:23:32.877 "trtype": "TCP", 00:23:32.877 "adrfam": "IPv4", 00:23:32.877 "traddr": "10.0.0.2", 00:23:32.878 "trsvcid": "4420", 00:23:32.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.878 "prchk_reftag": false, 00:23:32.878 "prchk_guard": false, 00:23:32.878 "ctrlr_loss_timeout_sec": 0, 00:23:32.878 "reconnect_delay_sec": 0, 00:23:32.878 "fast_io_fail_timeout_sec": 0, 00:23:32.878 "psk": "key0", 00:23:32.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.878 "hdgst": false, 00:23:32.878 "ddgst": false 00:23:32.878 } 00:23:32.878 }, 00:23:32.878 { 00:23:32.878 "method": "bdev_nvme_set_hotplug", 00:23:32.878 "params": { 00:23:32.878 "period_us": 100000, 00:23:32.878 "enable": false 00:23:32.878 } 00:23:32.878 }, 00:23:32.878 { 00:23:32.878 "method": "bdev_enable_histogram", 00:23:32.878 "params": { 00:23:32.878 "name": "nvme0n1", 00:23:32.878 "enable": true 00:23:32.878 } 00:23:32.878 }, 00:23:32.878 { 00:23:32.878 "method": "bdev_wait_for_examine" 00:23:32.878 } 00:23:32.878 ] 00:23:32.878 }, 00:23:32.878 { 00:23:32.878 "subsystem": "nbd", 00:23:32.878 "config": [] 00:23:32.878 } 00:23:32.878 ] 00:23:32.878 }' 00:23:32.878 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.878 16:22:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.878 [2024-07-15 16:22:15.730066] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
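One entry in the config just replayed, bdev_enable_histogram on nvme0n1, turns on per-bdev latency histogram collection inside bdevperf. Driven by hand over the same socket it would look roughly like this; both verbs are rpc.py commands, though whether this exact SPDK revision exposes them unchanged is an assumption:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc bdev_enable_histogram -e nvme0n1   # -e enables collection, -d would disable it
$rpc bdev_get_histogram nvme0n1         # dump the accumulated buckets for offline analysis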
00:23:32.878 [2024-07-15 16:22:15.730153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365572 ] 00:23:32.878 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.878 [2024-07-15 16:22:15.792896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.140 [2024-07-15 16:22:15.884371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.140 [2024-07-15 16:22:16.062296] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.788 16:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.788 16:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.788 16:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:33.788 16:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:34.045 16:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.045 16:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.302 Running I/O for 1 seconds... 00:23:35.232 00:23:35.232 Latency(us) 00:23:35.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.232 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:35.232 Verification LBA range: start 0x0 length 0x2000 00:23:35.232 nvme0n1 : 1.02 3068.58 11.99 0.00 0.00 41228.80 6747.78 55535.69 00:23:35.232 =================================================================================================================== 00:23:35.232 Total : 3068.58 11.99 0.00 0.00 41228.80 6747.78 55535.69 00:23:35.232 0 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:35.232 nvmf_trace.0 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 365572 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 365572 ']' 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 365572 
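Before trusting any I/O numbers, the script checks that the controller really attached: it lists the NVMe controllers known to bdevperf and compares the name against the expected nvme0, exactly as traced above. As a stand-alone snippet of that check:

name=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "controller nvme0 missing" >&2; exit 1; }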
00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 365572 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 365572' 00:23:35.232 killing process with pid 365572 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 365572 00:23:35.232 Received shutdown signal, test time was about 1.000000 seconds 00:23:35.232 00:23:35.232 Latency(us) 00:23:35.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.232 =================================================================================================================== 00:23:35.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.232 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 365572 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.534 rmmod nvme_tcp 00:23:35.534 rmmod nvme_fabrics 00:23:35.534 rmmod nvme_keyring 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 365418 ']' 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 365418 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 365418 ']' 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 365418 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.534 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 365418 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 365418' 00:23:35.791 killing process with pid 365418 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 365418 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 365418 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.791 16:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.330 16:22:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.330 16:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.gFelRcw4SW /tmp/tmp.HI6y5ChPeQ /tmp/tmp.UTwCVTPOr7 00:23:38.330 00:23:38.330 real 1m19.144s 00:23:38.330 user 2m6.290s 00:23:38.330 sys 0m28.259s 00:23:38.330 16:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:38.330 16:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.330 ************************************ 00:23:38.330 END TEST nvmf_tls 00:23:38.330 ************************************ 00:23:38.330 16:22:20 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.330 16:22:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:38.330 16:22:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:38.330 16:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.330 ************************************ 00:23:38.330 START TEST nvmf_fips 00:23:38.330 ************************************ 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:38.330 * Looking for test storage... 
00:23:38.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.330 16:22:20 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:38.330 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:38.331 16:22:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:38.331 Error setting digest 00:23:38.331 00C20840777F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:38.331 00C20840777F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.331 16:22:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.264 
16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:40.264 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:40.264 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.264 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:40.265 Found net devices under 0000:84:00.0: cvl_0_0 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:40.265 Found net devices under 0000:84:00.1: cvl_0_1 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.265 16:22:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:23:40.265 00:23:40.265 --- 10.0.0.2 ping statistics --- 00:23:40.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.265 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:23:40.265 00:23:40.265 --- 10.0.0.1 ping statistics --- 00:23:40.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.265 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=367833 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 367833 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 367833 ']' 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.265 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.265 [2024-07-15 16:22:23.177252] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:40.265 [2024-07-15 16:22:23.177356] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.265 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.524 [2024-07-15 16:22:23.243708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.524 [2024-07-15 16:22:23.331407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.524 [2024-07-15 16:22:23.331470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:40.524 [2024-07-15 16:22:23.331499] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.524 [2024-07-15 16:22:23.331510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.524 [2024-07-15 16:22:23.331520] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.524 [2024-07-15 16:22:23.331561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:40.524 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:40.783 [2024-07-15 16:22:23.705261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.783 [2024-07-15 16:22:23.721280] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.783 [2024-07-15 16:22:23.721516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.783 [2024-07-15 16:22:23.753946] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:40.783 malloc0 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=367976 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 367976 /var/tmp/bdevperf.sock 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 367976 ']' 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # 
local max_retries=100 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:41.042 16:22:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:41.042 [2024-07-15 16:22:23.842712] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:41.042 [2024-07-15 16:22:23.842820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367976 ] 00:23:41.042 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.042 [2024-07-15 16:22:23.902419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.042 [2024-07-15 16:22:23.989665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.300 16:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.300 16:22:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:41.300 16:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:41.558 [2024-07-15 16:22:24.315760] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.558 [2024-07-15 16:22:24.315888] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:41.558 TLSTESTn1 00:23:41.558 16:22:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.558 Running I/O for 10 seconds... 
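The fips.sh setup traced above gates the TLS test on a FIPS-capable OpenSSL: the version must be at least 3.0.0 (cmp_versions walks the dotted version fields one by one), fips.so must exist under the path reported by `openssl info -modulesdir`, the fips provider must appear in `openssl list -providers` once OPENSSL_CONF points at the generated spdk_fips.conf, and `openssl md5` must fail, since FIPS 140 forbids MD5 and a successful digest would mean the provider is not actually enforcing. A minimal standalone sketch of the same gate follows; this is a simplified illustration rather than the harness code (it checks only the major version, and it assumes an OpenSSL 3.x install whose provider directory openssl itself reports):

    #!/usr/bin/env bash
    # Sketch of the FIPS environment gate seen in fips.sh (simplified).
    set -e

    ver=$(openssl version | awk '{print $2}')      # e.g. "3.0.9"
    if (( ${ver%%.*} < 3 )); then
        echo "OpenSSL >= 3.0.0 required, found $ver" >&2; exit 1
    fi

    moddir=$(openssl info -modulesdir)             # e.g. /usr/lib64/ossl-modules
    if [[ ! -f "$moddir/fips.so" ]]; then
        echo "no FIPS provider module in $moddir" >&2; exit 1
    fi

    # Providers are read via $OPENSSL_CONF; the harness points this at a
    # generated spdk_fips.conf that activates the base and fips providers.
    if ! openssl list -providers | grep -qi fips; then
        echo "FIPS provider not loaded" >&2; exit 1
    fi

    # Positive proof of enforcement: a non-approved digest must be rejected.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded: FIPS mode is not enforced" >&2; exit 1
    fi
    echo "FIPS environment OK"

With the environment validated, the test writes the TLS PSK to test/nvmf/fips/key.txt, registers it for the host on the target side, and attaches bdevperf over TCP with --psk; the 10-second verify run whose results follow is the actual FIPS-mode TLS I/O.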
00:23:53.758 00:23:53.758 Latency(us) 00:23:53.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.758 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:53.758 Verification LBA range: start 0x0 length 0x2000 00:23:53.758 TLSTESTn1 : 10.03 3557.45 13.90 0.00 0.00 35913.47 6407.96 50875.35 00:23:53.758 =================================================================================================================== 00:23:53.758 Total : 3557.45 13.90 0.00 0.00 35913.47 6407.96 50875.35 00:23:53.758 0 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:53.758 nvmf_trace.0 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 367976 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 367976 ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 367976 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 367976 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 367976' 00:23:53.758 killing process with pid 367976 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 367976 00:23:53.758 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.758 00:23:53.758 Latency(us) 00:23:53.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.758 =================================================================================================================== 00:23:53.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.758 [2024-07-15 16:22:34.669439] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 367976 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.758 rmmod nvme_tcp 00:23:53.758 rmmod nvme_fabrics 00:23:53.758 rmmod nvme_keyring 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 367833 ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 367833 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 367833 ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 367833 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 367833 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 367833' 00:23:53.758 killing process with pid 367833 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 367833 00:23:53.758 [2024-07-15 16:22:34.974191] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:53.758 16:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 367833 00:23:53.758 16:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.758 16:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.758 16:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.759 16:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.759 16:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.759 16:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.759 16:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.759 16:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.325 16:22:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.325 16:22:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.325 00:23:54.325 real 0m16.414s 00:23:54.325 user 0m20.200s 00:23:54.325 sys 0m6.488s 00:23:54.325 16:22:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:54.325 16:22:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:54.325 ************************************ 00:23:54.325 END TEST nvmf_fips 00:23:54.325 
************************************ 00:23:54.325 16:22:37 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:54.325 16:22:37 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:54.325 16:22:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:54.325 16:22:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:54.325 16:22:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:54.584 ************************************ 00:23:54.584 START TEST nvmf_fuzz 00:23:54.584 ************************************ 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:54.584 * Looking for test storage... 00:23:54.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.584 16:22:37 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:54.584 16:22:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:56.489 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:56.489 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.489 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:56.490 Found net devices under 0000:84:00.0: cvl_0_0 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:56.490 Found net devices under 0000:84:00.1: cvl_0_1 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.490 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:23:56.747 00:23:56.747 --- 10.0.0.2 ping statistics --- 00:23:56.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.747 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:23:56.747 00:23:56.747 --- 10.0.0.1 ping statistics --- 00:23:56.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.747 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=371239 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 371239 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 371239 ']' 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
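As in the fips test, the fuzz target runs on an isolated two-port topology rather than loopback: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2, the other (cvl_0_1) stays in the host namespace as the 10.0.0.1 initiator side, and TCP port 4420 is opened for NVMe/TCP. Condensed from the nvmf_tcp_init trace above (run as root; the interface names are the ones detected on this machine):

    # Condensed from nvmf_tcp_init: turn a two-port NIC into an isolated
    # target/initiator pair on 10.0.0.0/24.
    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP
    ping -c 1 10.0.0.2                            # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1), so its listener at 10.0.0.2:4420 is reachable from the host only through the physical NIC ports.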
00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:56.747 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.006 Malloc0 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:57.006 16:22:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:29.080 Fuzzing completed. 
Shutting down the fuzz application
00:24:29.080
00:24:29.080 Dumping successful admin opcodes:
00:24:29.080 8, 9, 10, 24,
00:24:29.080 Dumping successful io opcodes:
00:24:29.080 0, 9,
00:24:29.080 NS: 0x200003aeff00 I/O qp, Total commands completed: 473236, total successful commands: 2735, random_seed: 1624899392
00:24:29.080 NS: 0x200003aeff00 admin qp, Total commands completed: 58608, total successful commands: 465, random_seed: 4279593024
00:24:29.080 16:23:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:24:29.080 Fuzzing completed. Shutting down the fuzz application
00:24:29.080
00:24:29.080 Dumping successful admin opcodes:
00:24:29.080 24,
00:24:29.080 Dumping successful io opcodes:
00:24:29.080
00:24:29.080 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3629765545
00:24:29.080 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3629869762
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:29.080 rmmod nvme_tcp
00:24:29.080 rmmod nvme_fabrics
00:24:29.080 rmmod nvme_keyring
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 371239 ']'
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 371239
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 371239 ']'
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 371239
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 371239
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 371239'
killing process with pid 371239
16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 371239
00:24:29.080 16:23:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 371239
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:29.080 16:23:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:31.618 16:23:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:31.619 16:23:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:24:31.619
00:24:31.619 real 0m36.813s
00:24:31.619 user 0m50.325s
00:24:31.619 sys 0m15.853s
00:24:31.619 16:23:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable
00:24:31.619 16:23:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:31.619 ************************************
00:24:31.619 END TEST nvmf_fuzz
00:24:31.619 ************************************
00:24:31.619 16:23:14 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:24:31.619 16:23:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:24:31.619 16:23:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:24:31.619 16:23:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:31.619 ************************************
00:24:31.619 START TEST nvmf_multiconnection
00:24:31.619 ************************************
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:24:31.619 * Looking for test storage...
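The fuzz test above exercises nvme_fuzz in its two modes: a 30-second randomized pass seeded with -S 123456, then a replay of the command set described by example.json. Both invocations, lifted from the trace so they can be repeated against the same TRID:

  # Flags are copied verbatim from the two runs above; only -t (duration),
  # -S (random seed), -j (JSON command file) and -F (target TRID) are glossed,
  # the rest are taken on faith from the trace.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  FUZZ="$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz"
  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

  "$FUZZ" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
  "$FUZZ" -m 0x2 -F "$TRID" -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a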
00:24:31.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable
00:24:31.619 16:23:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=()
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=()
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=()
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=()
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=()
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=()
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=()
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:24:33.524 Found 0000:84:00.0 (0x8086 - 0x159b)
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:24:33.524 Found 0000:84:00.1 (0x8086 - 0x159b)
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:24:33.524 Found net devices under 0000:84:00.0: cvl_0_0
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:24:33.524 Found net devices under 0000:84:00.1: cvl_0_1
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:33.524 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
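gather_supported_nvmf_pci_devs, traced above, matches NICs by PCI vendor/device ID (E810, X722, and several Mellanox parts) and then resolves each matching function to its kernel interface through sysfs. The sysfs half of that lookup, reduced to a hypothetical standalone loop over the two E810 functions this host reports:

  # For each NIC PCI function, the interface name is the directory entry under
  # /sys/bus/pci/devices/<bdf>/net/ -- exactly what the @383/@399 trace shows.
  for pci in 0000:84:00.0 0000:84:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done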
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:33.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:33.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms
00:24:33.525
00:24:33.525 --- 10.0.0.2 ping statistics ---
00:24:33.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:33.525 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:33.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:33.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:24:33.525
00:24:33.525 --- 10.0.0.1 ping statistics ---
00:24:33.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:33.525 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=376858
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 376858
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 376858 ']'
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
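nvmf_tcp_init splits the dual-port NIC across a network namespace: the target port cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in the firewall, and both directions are ping-checked before the target starts. Condensed from the commands above, with a simple socket poll as a stand-in for the harness's waitforlisten:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                                       # root ns -> target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> initiator port

  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done      # crude waitforlisten substitute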
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:33.525 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.525 [2024-07-15 16:23:16.341649] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:24:33.525 [2024-07-15 16:23:16.341733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:33.525 EAL: No free 2048 kB hugepages reported on node 1
00:24:33.525 [2024-07-15 16:23:16.409400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:33.525 [2024-07-15 16:23:16.502046] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:33.525 [2024-07-15 16:23:16.502118] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:33.525 [2024-07-15 16:23:16.502134] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:33.525 [2024-07-15 16:23:16.502147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:33.525 [2024-07-15 16:23:16.502158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:33.525 [2024-07-15 16:23:16.502221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:33.525 [2024-07-15 16:23:16.502288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:33.525 [2024-07-15 16:23:16.502387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:24:33.525 [2024-07-15 16:23:16.502389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 [2024-07-15 16:23:16.638235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 Malloc1
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 [2024-07-15 16:23:16.692807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 Malloc2
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:33.785 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.045 Malloc3
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.045 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 Malloc4
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 Malloc5
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 Malloc6
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 Malloc7
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 Malloc8
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.046 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.308 Malloc9
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.308 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.309 Malloc10
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.309 Malloc11
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
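multiconnection.sh stamps out the bdev/subsystem/namespace/listener pattern eleven times, driven by the MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and NVMF_SUBSYS=11 variables set earlier. The loop reconstructed as a standalone sketch, again with rpc.py standing in for the harness's rpc_cmd wrapper:

  NVMF_SUBSYS=11
  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=512
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for i in $(seq 1 $NVMF_SUBSYS); do
      "$RPC" bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b "Malloc$i"
      "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done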
00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.309 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:34.950 16:23:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:34.950 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:34.950 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.950 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:34.950 16:23:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.847 16:23:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:37.782 16:23:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:37.782 16:23:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:37.782 16:23:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.782 16:23:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:37.782 16:23:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:39.681 16:23:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:39.681 16:23:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:39.681 16:23:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:39.681 16:23:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:39.681 16:23:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:39.681 
16:23:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:39.681 16:23:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:39.681 16:23:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:40.243 16:23:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:40.243 16:23:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:40.243 16:23:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.243 16:23:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:40.243 16:23:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.769 16:23:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:43.028 16:23:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:43.028 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:43.028 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.028 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:43.028 16:23:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.564 16:23:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:45.824 16:23:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:45.824 16:23:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:45.824 16:23:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.824 16:23:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:45.824 16:23:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.727 16:23:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:48.663 16:23:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:48.663 16:23:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:48.663 16:23:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:48.663 16:23:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:48.663 16:23:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.570 16:23:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:51.507 16:23:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:51.507 16:23:34 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:51.507 16:23:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:51.507 16:23:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:51.507 16:23:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.410 16:23:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:54.346 16:23:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:54.346 16:23:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:54.346 16:23:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:54.346 16:23:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:54.346 16:23:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.251 16:23:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:57.189 16:23:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:57.189 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:57.189 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.189 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
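waitforserial itself (the autotest_common.sh@1194-@1204 frames in the trace) is a bounded poll: sleep two seconds, count how many lsblk rows carry the expected serial, and return success once the count matches the expected device count (one, unless a second argument overrides it). The (( i++ <= 15 )) guard caps this at 16 attempts, so a subsystem has roughly 16 x 2 s = 32 seconds to surface before the connect is considered failed. A behaviorally equivalent sketch, reconstructed from the trace rather than copied from the source:

    # Reconstruction of waitforserial from the xtrace; the exact line layout is an assumption.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        [[ -n ${2:-} ]] && nvme_device_counter=$2   # the [[ -n '' ]] check seen above
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose SERIAL column matches, e.g. SPDK1
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }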
00:24:57.189 16:23:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.090 16:23:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:00.025 16:23:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:00.025 16:23:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:00.025 16:23:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.025 16:23:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:00.025 16:23:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:01.925 16:23:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:01.925 16:23:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:01.925 16:23:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:01.925 16:23:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:01.925 16:23:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.925 16:23:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:01.926 16:23:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.926 16:23:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:02.861 16:23:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:02.861 16:23:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:02.861 16:23:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.861 16:23:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:02.861 16:23:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:04.784 16:23:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:04.784 16:23:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:04.784 16:23:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:04.784 16:23:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:04.784 16:23:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.784 16:23:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:04.784 16:23:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:05.059 [global] 00:25:05.059 thread=1 00:25:05.059 invalidate=1 00:25:05.059 rw=read 00:25:05.059 time_based=1 00:25:05.059 runtime=10 00:25:05.059 ioengine=libaio 00:25:05.059 direct=1 00:25:05.059 bs=262144 00:25:05.059 iodepth=64 00:25:05.059 norandommap=1 00:25:05.059 numjobs=1 00:25:05.059 00:25:05.059 [job0] 00:25:05.059 filename=/dev/nvme0n1 00:25:05.059 [job1] 00:25:05.059 filename=/dev/nvme10n1 00:25:05.059 [job2] 00:25:05.059 filename=/dev/nvme1n1 00:25:05.059 [job3] 00:25:05.059 filename=/dev/nvme2n1 00:25:05.059 [job4] 00:25:05.059 filename=/dev/nvme3n1 00:25:05.059 [job5] 00:25:05.059 filename=/dev/nvme4n1 00:25:05.059 [job6] 00:25:05.059 filename=/dev/nvme5n1 00:25:05.059 [job7] 00:25:05.059 filename=/dev/nvme6n1 00:25:05.059 [job8] 00:25:05.059 filename=/dev/nvme7n1 00:25:05.059 [job9] 00:25:05.059 filename=/dev/nvme8n1 00:25:05.059 [job10] 00:25:05.059 filename=/dev/nvme9n1 00:25:05.059 Could not set queue depth (nvme0n1) 00:25:05.059 Could not set queue depth (nvme10n1) 00:25:05.059 Could not set queue depth (nvme1n1) 00:25:05.059 Could not set queue depth (nvme2n1) 00:25:05.059 Could not set queue depth (nvme3n1) 00:25:05.059 Could not set queue depth (nvme4n1) 00:25:05.059 Could not set queue depth (nvme5n1) 00:25:05.059 Could not set queue depth (nvme6n1) 00:25:05.059 Could not set queue depth (nvme7n1) 00:25:05.059 Could not set queue depth (nvme8n1) 00:25:05.059 Could not set queue depth (nvme9n1) 00:25:05.316 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:05.316 fio-3.35 00:25:05.316 Starting 11 threads 00:25:17.519 00:25:17.519 job0: 
(groupid=0, jobs=1): err= 0: pid=381115: Mon Jul 15 16:23:58 2024 00:25:17.519 read: IOPS=662, BW=166MiB/s (174MB/s)(1667MiB/10074msec) 00:25:17.519 slat (usec): min=10, max=65066, avg=1034.79, stdev=3887.56 00:25:17.519 clat (usec): min=803, max=247228, avg=95518.12, stdev=40578.70 00:25:17.519 lat (usec): min=827, max=247260, avg=96552.91, stdev=40931.17 00:25:17.519 clat percentiles (msec): 00:25:17.519 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 47], 20.00th=[ 65], 00:25:17.519 | 30.00th=[ 75], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 104], 00:25:17.519 | 70.00th=[ 114], 80.00th=[ 128], 90.00th=[ 150], 95.00th=[ 165], 00:25:17.519 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 224], 00:25:17.519 | 99.99th=[ 247] 00:25:17.519 bw ( KiB/s): min=109056, max=262144, per=8.42%, avg=169071.60, stdev=43751.64, samples=20 00:25:17.519 iops : min= 426, max= 1024, avg=660.40, stdev=170.91, samples=20 00:25:17.519 lat (usec) : 1000=0.06% 00:25:17.519 lat (msec) : 2=0.10%, 4=0.60%, 10=1.83%, 20=2.28%, 50=5.79% 00:25:17.519 lat (msec) : 100=46.51%, 250=42.83% 00:25:17.519 cpu : usr=0.38%, sys=1.90%, ctx=1522, majf=0, minf=4097 00:25:17.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:17.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.519 issued rwts: total=6669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.519 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.519 job1: (groupid=0, jobs=1): err= 0: pid=381116: Mon Jul 15 16:23:58 2024 00:25:17.519 read: IOPS=869, BW=217MiB/s (228MB/s)(2178MiB/10023msec) 00:25:17.519 slat (usec): min=9, max=77029, avg=515.70, stdev=2612.17 00:25:17.519 clat (usec): min=752, max=222616, avg=73018.95, stdev=48293.50 00:25:17.519 lat (usec): min=778, max=240527, avg=73534.66, stdev=48578.79 00:25:17.519 clat percentiles (msec): 00:25:17.519 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 27], 00:25:17.519 | 30.00th=[ 43], 40.00th=[ 57], 50.00th=[ 65], 60.00th=[ 78], 00:25:17.519 | 70.00th=[ 93], 80.00th=[ 114], 90.00th=[ 148], 95.00th=[ 165], 00:25:17.519 | 99.00th=[ 197], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 222], 00:25:17.519 | 99.99th=[ 224] 00:25:17.519 bw ( KiB/s): min=110080, max=468480, per=11.03%, avg=221374.60, stdev=83242.01, samples=20 00:25:17.519 iops : min= 430, max= 1830, avg=864.65, stdev=325.17, samples=20 00:25:17.519 lat (usec) : 1000=0.10% 00:25:17.519 lat (msec) : 2=0.25%, 4=1.54%, 10=4.78%, 20=9.96%, 50=17.23% 00:25:17.519 lat (msec) : 100=39.88%, 250=26.26% 00:25:17.519 cpu : usr=0.48%, sys=2.22%, ctx=2116, majf=0, minf=4097 00:25:17.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:17.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.519 issued rwts: total=8712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.519 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.519 job2: (groupid=0, jobs=1): err= 0: pid=381119: Mon Jul 15 16:23:58 2024 00:25:17.519 read: IOPS=808, BW=202MiB/s (212MB/s)(2043MiB/10110msec) 00:25:17.519 slat (usec): min=10, max=144150, avg=731.57, stdev=4031.69 00:25:17.519 clat (usec): min=1113, max=271596, avg=78376.02, stdev=44060.69 00:25:17.519 lat (usec): min=1192, max=279789, avg=79107.59, stdev=44471.63 00:25:17.519 clat percentiles (msec): 00:25:17.519 | 1.00th=[ 7], 
5.00th=[ 22], 10.00th=[ 30], 20.00th=[ 41], 00:25:17.519 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 80], 00:25:17.519 | 70.00th=[ 96], 80.00th=[ 114], 90.00th=[ 142], 95.00th=[ 169], 00:25:17.519 | 99.00th=[ 201], 99.50th=[ 215], 99.90th=[ 224], 99.95th=[ 232], 00:25:17.519 | 99.99th=[ 271] 00:25:17.519 bw ( KiB/s): min=107008, max=367616, per=10.34%, avg=207507.75, stdev=78950.64, samples=20 00:25:17.519 iops : min= 418, max= 1436, avg=810.50, stdev=308.38, samples=20 00:25:17.519 lat (msec) : 2=0.11%, 4=0.22%, 10=1.26%, 20=2.95%, 50=23.02% 00:25:17.519 lat (msec) : 100=44.73%, 250=27.70%, 500=0.01% 00:25:17.519 cpu : usr=0.37%, sys=2.23%, ctx=1835, majf=0, minf=4097 00:25:17.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:17.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.519 issued rwts: total=8171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.519 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.519 job3: (groupid=0, jobs=1): err= 0: pid=381120: Mon Jul 15 16:23:58 2024 00:25:17.519 read: IOPS=611, BW=153MiB/s (160MB/s)(1540MiB/10073msec) 00:25:17.519 slat (usec): min=9, max=49692, avg=949.85, stdev=3538.96 00:25:17.519 clat (usec): min=1881, max=232351, avg=103595.55, stdev=43162.72 00:25:17.519 lat (usec): min=1903, max=233016, avg=104545.40, stdev=43512.85 00:25:17.519 clat percentiles (msec): 00:25:17.519 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 51], 20.00th=[ 70], 00:25:17.519 | 30.00th=[ 82], 40.00th=[ 91], 50.00th=[ 102], 60.00th=[ 113], 00:25:17.519 | 70.00th=[ 130], 80.00th=[ 144], 90.00th=[ 163], 95.00th=[ 174], 00:25:17.519 | 99.00th=[ 192], 99.50th=[ 211], 99.90th=[ 218], 99.95th=[ 220], 00:25:17.519 | 99.99th=[ 232] 00:25:17.519 bw ( KiB/s): min=99840, max=224256, per=7.78%, avg=156045.05, stdev=35442.83, samples=20 00:25:17.519 iops : min= 390, max= 876, avg=609.50, stdev=138.47, samples=20 00:25:17.519 lat (msec) : 2=0.06%, 4=0.71%, 10=1.04%, 20=2.78%, 50=5.15% 00:25:17.519 lat (msec) : 100=39.09%, 250=51.17% 00:25:17.519 cpu : usr=0.33%, sys=1.72%, ctx=1543, majf=0, minf=4097 00:25:17.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:17.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.519 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.519 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.519 job4: (groupid=0, jobs=1): err= 0: pid=381121: Mon Jul 15 16:23:58 2024 00:25:17.519 read: IOPS=679, BW=170MiB/s (178MB/s)(1702MiB/10016msec) 00:25:17.519 slat (usec): min=10, max=114431, avg=916.22, stdev=4121.82 00:25:17.519 clat (msec): min=2, max=240, avg=93.16, stdev=48.73 00:25:17.519 lat (msec): min=2, max=264, avg=94.08, stdev=49.29 00:25:17.519 clat percentiles (msec): 00:25:17.519 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 31], 20.00th=[ 50], 00:25:17.519 | 30.00th=[ 64], 40.00th=[ 77], 50.00th=[ 89], 60.00th=[ 103], 00:25:17.519 | 70.00th=[ 123], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 176], 00:25:17.519 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 224], 99.95th=[ 236], 00:25:17.519 | 99.99th=[ 241] 00:25:17.519 bw ( KiB/s): min=93696, max=344064, per=8.60%, avg=172656.00, stdev=60410.21, samples=20 00:25:17.519 iops : min= 366, max= 1344, avg=674.40, stdev=235.98, samples=20 00:25:17.519 lat 
(msec) : 4=0.57%, 10=2.66%, 20=2.79%, 50=14.57%, 100=38.16% 00:25:17.519 lat (msec) : 250=41.25% 00:25:17.519 cpu : usr=0.40%, sys=1.77%, ctx=1650, majf=0, minf=4097 00:25:17.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:17.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.520 issued rwts: total=6808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.520 job5: (groupid=0, jobs=1): err= 0: pid=381122: Mon Jul 15 16:23:58 2024 00:25:17.520 read: IOPS=661, BW=165MiB/s (173MB/s)(1673MiB/10112msec) 00:25:17.520 slat (usec): min=9, max=128277, avg=937.13, stdev=4192.46 00:25:17.520 clat (msec): min=2, max=248, avg=95.63, stdev=48.25 00:25:17.520 lat (msec): min=2, max=285, avg=96.57, stdev=48.83 00:25:17.520 clat percentiles (msec): 00:25:17.520 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 48], 00:25:17.520 | 30.00th=[ 71], 40.00th=[ 87], 50.00th=[ 100], 60.00th=[ 111], 00:25:17.520 | 70.00th=[ 123], 80.00th=[ 140], 90.00th=[ 157], 95.00th=[ 169], 00:25:17.520 | 99.00th=[ 207], 99.50th=[ 220], 99.90th=[ 224], 99.95th=[ 224], 00:25:17.520 | 99.99th=[ 249] 00:25:17.520 bw ( KiB/s): min=99328, max=286720, per=8.45%, avg=169660.55, stdev=49262.35, samples=20 00:25:17.520 iops : min= 388, max= 1120, avg=662.70, stdev=192.41, samples=20 00:25:17.520 lat (msec) : 4=0.52%, 10=2.54%, 20=4.03%, 50=13.34%, 100=30.59% 00:25:17.520 lat (msec) : 250=48.97% 00:25:17.520 cpu : usr=0.37%, sys=1.87%, ctx=1546, majf=0, minf=4097 00:25:17.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:17.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.520 issued rwts: total=6692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.520 job6: (groupid=0, jobs=1): err= 0: pid=381125: Mon Jul 15 16:23:58 2024 00:25:17.520 read: IOPS=726, BW=182MiB/s (190MB/s)(1837MiB/10117msec) 00:25:17.520 slat (usec): min=9, max=84849, avg=617.82, stdev=3301.04 00:25:17.520 clat (usec): min=785, max=241049, avg=87402.76, stdev=55584.93 00:25:17.520 lat (usec): min=807, max=255533, avg=88020.59, stdev=56056.80 00:25:17.520 clat percentiles (usec): 00:25:17.520 | 1.00th=[ 1958], 5.00th=[ 6980], 10.00th=[ 14746], 20.00th=[ 27919], 00:25:17.520 | 30.00th=[ 50594], 40.00th=[ 67634], 50.00th=[ 83362], 60.00th=[100140], 00:25:17.520 | 70.00th=[122160], 80.00th=[143655], 90.00th=[164627], 95.00th=[179307], 00:25:17.520 | 99.00th=[204473], 99.50th=[212861], 99.90th=[225444], 99.95th=[235930], 00:25:17.520 | 99.99th=[240124] 00:25:17.520 bw ( KiB/s): min=94019, max=361984, per=9.29%, avg=186420.70, stdev=71549.08, samples=20 00:25:17.520 iops : min= 367, max= 1414, avg=728.15, stdev=279.46, samples=20 00:25:17.520 lat (usec) : 1000=0.10% 00:25:17.520 lat (msec) : 2=0.93%, 4=1.86%, 10=3.82%, 20=8.81%, 50=14.28% 00:25:17.520 lat (msec) : 100=29.98%, 250=40.23% 00:25:17.520 cpu : usr=0.37%, sys=1.93%, ctx=1958, majf=0, minf=4097 00:25:17.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:17.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.520 issued rwts: 
total=7348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.520 job7: (groupid=0, jobs=1): err= 0: pid=381126: Mon Jul 15 16:23:58 2024 00:25:17.520 read: IOPS=604, BW=151MiB/s (158MB/s)(1528MiB/10110msec) 00:25:17.520 slat (usec): min=10, max=44579, avg=1491.68, stdev=4397.61 00:25:17.520 clat (usec): min=1376, max=229953, avg=104268.43, stdev=43357.75 00:25:17.520 lat (usec): min=1410, max=244545, avg=105760.10, stdev=44031.27 00:25:17.520 clat percentiles (msec): 00:25:17.520 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 56], 20.00th=[ 69], 00:25:17.520 | 30.00th=[ 80], 40.00th=[ 90], 50.00th=[ 101], 60.00th=[ 112], 00:25:17.520 | 70.00th=[ 125], 80.00th=[ 140], 90.00th=[ 167], 95.00th=[ 182], 00:25:17.520 | 99.00th=[ 209], 99.50th=[ 213], 99.90th=[ 228], 99.95th=[ 230], 00:25:17.520 | 99.99th=[ 230] 00:25:17.520 bw ( KiB/s): min=79360, max=238592, per=7.71%, avg=154784.05, stdev=48974.96, samples=20 00:25:17.520 iops : min= 310, max= 932, avg=604.60, stdev=191.31, samples=20 00:25:17.520 lat (msec) : 2=0.02%, 4=0.51%, 10=1.41%, 20=1.52%, 50=3.52% 00:25:17.520 lat (msec) : 100=42.96%, 250=50.07% 00:25:17.520 cpu : usr=0.33%, sys=1.67%, ctx=1309, majf=0, minf=4097 00:25:17.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:17.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.520 issued rwts: total=6111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.520 job8: (groupid=0, jobs=1): err= 0: pid=381128: Mon Jul 15 16:23:58 2024 00:25:17.520 read: IOPS=848, BW=212MiB/s (222MB/s)(2144MiB/10111msec) 00:25:17.520 slat (usec): min=9, max=97456, avg=482.78, stdev=3187.43 00:25:17.520 clat (usec): min=748, max=241862, avg=74876.75, stdev=49623.05 00:25:17.520 lat (usec): min=766, max=290707, avg=75359.53, stdev=50067.32 00:25:17.520 clat percentiles (msec): 00:25:17.520 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 25], 00:25:17.520 | 30.00th=[ 38], 40.00th=[ 52], 50.00th=[ 71], 60.00th=[ 91], 00:25:17.520 | 70.00th=[ 106], 80.00th=[ 120], 90.00th=[ 144], 95.00th=[ 163], 00:25:17.520 | 99.00th=[ 190], 99.50th=[ 199], 99.90th=[ 213], 99.95th=[ 215], 00:25:17.520 | 99.99th=[ 243] 00:25:17.520 bw ( KiB/s): min=115712, max=359936, per=10.86%, avg=217916.20, stdev=76363.69, samples=20 00:25:17.520 iops : min= 452, max= 1406, avg=851.20, stdev=298.26, samples=20 00:25:17.520 lat (usec) : 750=0.01%, 1000=0.02% 00:25:17.520 lat (msec) : 2=0.47%, 4=1.99%, 10=4.87%, 20=6.95%, 50=24.81% 00:25:17.520 lat (msec) : 100=26.71%, 250=34.16% 00:25:17.520 cpu : usr=0.42%, sys=2.36%, ctx=2245, majf=0, minf=4097 00:25:17.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:17.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.520 issued rwts: total=8577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.520 job9: (groupid=0, jobs=1): err= 0: pid=381129: Mon Jul 15 16:23:58 2024 00:25:17.520 read: IOPS=648, BW=162MiB/s (170MB/s)(1625MiB/10025msec) 00:25:17.520 slat (usec): min=10, max=117449, avg=1222.56, stdev=4448.94 00:25:17.520 clat (usec): min=835, max=269445, avg=97382.00, stdev=47554.26 00:25:17.520 lat (usec): 
min=855, max=269457, avg=98604.56, stdev=48170.35 00:25:17.520 clat percentiles (msec): 00:25:17.520 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 41], 20.00th=[ 57], 00:25:17.520 | 30.00th=[ 66], 40.00th=[ 79], 50.00th=[ 94], 60.00th=[ 110], 00:25:17.520 | 70.00th=[ 128], 80.00th=[ 142], 90.00th=[ 161], 95.00th=[ 174], 00:25:17.520 | 99.00th=[ 215], 99.50th=[ 224], 99.90th=[ 230], 99.95th=[ 239], 00:25:17.520 | 99.99th=[ 271] 00:25:17.520 bw ( KiB/s): min=96063, max=283081, per=8.21%, avg=164722.45, stdev=54759.67, samples=20 00:25:17.520 iops : min= 375, max= 1105, avg=643.35, stdev=213.82, samples=20 00:25:17.520 lat (usec) : 1000=0.05% 00:25:17.520 lat (msec) : 2=0.05%, 4=0.94%, 10=1.88%, 20=1.86%, 50=9.34% 00:25:17.520 lat (msec) : 100=39.53%, 250=46.35%, 500=0.02% 00:25:17.520 cpu : usr=0.29%, sys=1.75%, ctx=1556, majf=0, minf=4097 00:25:17.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:17.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.520 issued rwts: total=6499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.520 job10: (groupid=0, jobs=1): err= 0: pid=381130: Mon Jul 15 16:23:58 2024 00:25:17.520 read: IOPS=751, BW=188MiB/s (197MB/s)(1892MiB/10072msec) 00:25:17.520 slat (usec): min=9, max=88159, avg=816.64, stdev=3585.98 00:25:17.520 clat (usec): min=1124, max=245397, avg=84295.79, stdev=50604.51 00:25:17.520 lat (usec): min=1148, max=260382, avg=85112.43, stdev=51102.73 00:25:17.520 clat percentiles (msec): 00:25:17.520 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 24], 20.00th=[ 38], 00:25:17.520 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 74], 60.00th=[ 94], 00:25:17.520 | 70.00th=[ 113], 80.00th=[ 136], 90.00th=[ 159], 95.00th=[ 174], 00:25:17.520 | 99.00th=[ 205], 99.50th=[ 213], 99.90th=[ 222], 99.95th=[ 226], 00:25:17.520 | 99.99th=[ 247] 00:25:17.520 bw ( KiB/s): min=105984, max=328047, per=9.57%, avg=192026.35, stdev=71471.92, samples=20 00:25:17.520 iops : min= 414, max= 1281, avg=750.05, stdev=279.13, samples=20 00:25:17.520 lat (msec) : 2=0.09%, 4=0.30%, 10=2.85%, 20=4.47%, 50=21.79% 00:25:17.520 lat (msec) : 100=33.86%, 250=36.63% 00:25:17.520 cpu : usr=0.38%, sys=1.55%, ctx=1727, majf=0, minf=3721 00:25:17.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:17.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:17.520 issued rwts: total=7567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.520 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:17.520 00:25:17.520 Run status group 0 (all jobs): 00:25:17.520 READ: bw=1960MiB/s (2055MB/s), 151MiB/s-217MiB/s (158MB/s-228MB/s), io=19.4GiB (20.8GB), run=10016-10117msec 00:25:17.520 00:25:17.520 Disk stats (read/write): 00:25:17.520 nvme0n1: ios=13112/0, merge=0/0, ticks=1237919/0, in_queue=1237919, util=97.14% 00:25:17.520 nvme10n1: ios=17148/0, merge=0/0, ticks=1246073/0, in_queue=1246073, util=97.33% 00:25:17.520 nvme1n1: ios=16136/0, merge=0/0, ticks=1240537/0, in_queue=1240537, util=97.61% 00:25:17.521 nvme2n1: ios=12106/0, merge=0/0, ticks=1239388/0, in_queue=1239388, util=97.78% 00:25:17.521 nvme3n1: ios=13234/0, merge=0/0, ticks=1243364/0, in_queue=1243364, util=97.84% 00:25:17.521 nvme4n1: ios=13205/0, merge=0/0, ticks=1239062/0, in_queue=1239062, 
util=98.20% 00:25:17.521 nvme5n1: ios=14470/0, merge=0/0, ticks=1242173/0, in_queue=1242173, util=98.37% 00:25:17.521 nvme6n1: ios=12042/0, merge=0/0, ticks=1231409/0, in_queue=1231409, util=98.49% 00:25:17.521 nvme7n1: ios=16918/0, merge=0/0, ticks=1241534/0, in_queue=1241534, util=98.90% 00:25:17.521 nvme8n1: ios=12722/0, merge=0/0, ticks=1239276/0, in_queue=1239276, util=99.10% 00:25:17.521 nvme9n1: ios=14936/0, merge=0/0, ticks=1240655/0, in_queue=1240655, util=99.23% 00:25:17.521 16:23:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:17.521 [global] 00:25:17.521 thread=1 00:25:17.521 invalidate=1 00:25:17.521 rw=randwrite 00:25:17.521 time_based=1 00:25:17.521 runtime=10 00:25:17.521 ioengine=libaio 00:25:17.521 direct=1 00:25:17.521 bs=262144 00:25:17.521 iodepth=64 00:25:17.521 norandommap=1 00:25:17.521 numjobs=1 00:25:17.521 00:25:17.521 [job0] 00:25:17.521 filename=/dev/nvme0n1 00:25:17.521 [job1] 00:25:17.521 filename=/dev/nvme10n1 00:25:17.521 [job2] 00:25:17.521 filename=/dev/nvme1n1 00:25:17.521 [job3] 00:25:17.521 filename=/dev/nvme2n1 00:25:17.521 [job4] 00:25:17.521 filename=/dev/nvme3n1 00:25:17.521 [job5] 00:25:17.521 filename=/dev/nvme4n1 00:25:17.521 [job6] 00:25:17.521 filename=/dev/nvme5n1 00:25:17.521 [job7] 00:25:17.521 filename=/dev/nvme6n1 00:25:17.521 [job8] 00:25:17.521 filename=/dev/nvme7n1 00:25:17.521 [job9] 00:25:17.521 filename=/dev/nvme8n1 00:25:17.521 [job10] 00:25:17.521 filename=/dev/nvme9n1 00:25:17.521 Could not set queue depth (nvme0n1) 00:25:17.521 Could not set queue depth (nvme10n1) 00:25:17.521 Could not set queue depth (nvme1n1) 00:25:17.521 Could not set queue depth (nvme2n1) 00:25:17.521 Could not set queue depth (nvme3n1) 00:25:17.521 Could not set queue depth (nvme4n1) 00:25:17.521 Could not set queue depth (nvme5n1) 00:25:17.521 Could not set queue depth (nvme6n1) 00:25:17.521 Could not set queue depth (nvme7n1) 00:25:17.521 Could not set queue depth (nvme8n1) 00:25:17.521 Could not set queue depth (nvme9n1) 00:25:17.521 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.521 
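Both fio phases run the same workload shape and differ only in rw=: 256 KiB requests at queue depth 64 through libaio with direct=1, one time-based 10-second job per connected namespace. The "Could not set queue depth" lines are expected noise here: fio warns when it cannot adjust a device's queue-depth attribute through sysfs, which NVMe block devices generally do not expose. For a single device, a roughly equivalent standalone invocation would be the following (an illustration, not the wrapper's actual command line):

    # Hypothetical single-device equivalent of one generated fio job; all option
    # values are taken from the job file dumped above.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
        --thread --invalidate=1 --norandommap --numjobs=1 \
        --time_based --runtime=10

The per-job numbers cross-check against these settings: in the read phase above, job0's 662 IOPS at 256 KiB per request is about 662 x 256 KiB = 165.5 MiB/s, matching its reported bw=166MiB/s, and its per=8.42% is that job's share of the ~1960 MiB/s aggregate shown in the run-status line.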
fio-3.35 00:25:17.521 Starting 11 threads 00:25:27.493 00:25:27.493 job0: (groupid=0, jobs=1): err= 0: pid=382212: Mon Jul 15 16:24:09 2024 00:25:27.493 write: IOPS=446, BW=112MiB/s (117MB/s)(1138MiB/10191msec); 0 zone resets 00:25:27.493 slat (usec): min=17, max=54094, avg=1389.85, stdev=4151.66 00:25:27.493 clat (usec): min=1177, max=413975, avg=141694.86, stdev=76822.34 00:25:27.493 lat (usec): min=1249, max=414074, avg=143084.71, stdev=77962.93 00:25:27.493 clat percentiles (msec): 00:25:27.493 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 36], 20.00th=[ 70], 00:25:27.493 | 30.00th=[ 95], 40.00th=[ 123], 50.00th=[ 148], 60.00th=[ 169], 00:25:27.493 | 70.00th=[ 190], 80.00th=[ 203], 90.00th=[ 220], 95.00th=[ 249], 00:25:27.493 | 99.00th=[ 368], 99.50th=[ 393], 99.90th=[ 409], 99.95th=[ 409], 00:25:27.493 | 99.99th=[ 414] 00:25:27.493 bw ( KiB/s): min=51200, max=190464, per=7.78%, avg=114866.30, stdev=40856.52, samples=20 00:25:27.493 iops : min= 200, max= 744, avg=448.60, stdev=159.59, samples=20 00:25:27.493 lat (msec) : 2=0.15%, 4=0.86%, 10=2.64%, 20=2.66%, 50=7.27% 00:25:27.493 lat (msec) : 100=17.73%, 250=63.91%, 500=4.79% 00:25:27.493 cpu : usr=1.73%, sys=1.32%, ctx=2925, majf=0, minf=1 00:25:27.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:27.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.493 issued rwts: total=0,4552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.493 job1: (groupid=0, jobs=1): err= 0: pid=382238: Mon Jul 15 16:24:09 2024 00:25:27.493 write: IOPS=553, BW=138MiB/s (145MB/s)(1406MiB/10149msec); 0 zone resets 00:25:27.493 slat (usec): min=20, max=26624, avg=834.54, stdev=2824.92 00:25:27.493 clat (usec): min=989, max=364563, avg=114622.74, stdev=62992.20 00:25:27.493 lat (usec): min=1020, max=364638, avg=115457.28, stdev=63656.97 00:25:27.493 clat percentiles (msec): 00:25:27.493 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 30], 20.00th=[ 55], 00:25:27.493 | 30.00th=[ 78], 40.00th=[ 93], 50.00th=[ 116], 60.00th=[ 132], 00:25:27.493 | 70.00th=[ 150], 80.00th=[ 169], 90.00th=[ 194], 95.00th=[ 224], 00:25:27.493 | 99.00th=[ 255], 99.50th=[ 266], 99.90th=[ 355], 99.95th=[ 363], 00:25:27.493 | 99.99th=[ 363] 00:25:27.493 bw ( KiB/s): min=69632, max=259072, per=9.64%, avg=142271.60, stdev=46409.29, samples=20 00:25:27.493 iops : min= 272, max= 1012, avg=555.60, stdev=181.31, samples=20 00:25:27.493 lat (usec) : 1000=0.02% 00:25:27.493 lat (msec) : 2=0.28%, 4=0.71%, 10=2.74%, 20=3.63%, 50=10.30% 00:25:27.493 lat (msec) : 100=25.28%, 250=55.60%, 500=1.44% 00:25:27.493 cpu : usr=2.32%, sys=1.49%, ctx=3949, majf=0, minf=1 00:25:27.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:27.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.493 issued rwts: total=0,5622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.493 job2: (groupid=0, jobs=1): err= 0: pid=382274: Mon Jul 15 16:24:09 2024 00:25:27.493 write: IOPS=694, BW=174MiB/s (182MB/s)(1759MiB/10127msec); 0 zone resets 00:25:27.493 slat (usec): min=20, max=42052, avg=759.99, stdev=2438.53 00:25:27.493 clat (usec): min=844, max=264866, avg=91265.05, stdev=60411.87 00:25:27.493 lat (usec): min=889, 
max=266959, avg=92025.04, stdev=60970.96 00:25:27.493 clat percentiles (msec): 00:25:27.493 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 40], 00:25:27.493 | 30.00th=[ 46], 40.00th=[ 55], 50.00th=[ 80], 60.00th=[ 107], 00:25:27.493 | 70.00th=[ 126], 80.00th=[ 150], 90.00th=[ 182], 95.00th=[ 201], 00:25:27.493 | 99.00th=[ 230], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 262], 00:25:27.493 | 99.99th=[ 266] 00:25:27.493 bw ( KiB/s): min=100151, max=348672, per=12.09%, avg=178494.05, stdev=73468.80, samples=20 00:25:27.493 iops : min= 391, max= 1362, avg=697.15, stdev=287.03, samples=20 00:25:27.493 lat (usec) : 1000=0.01% 00:25:27.493 lat (msec) : 2=0.21%, 4=0.71%, 10=3.74%, 20=5.78%, 50=26.56% 00:25:27.493 lat (msec) : 100=20.12%, 250=42.39%, 500=0.47% 00:25:27.493 cpu : usr=2.67%, sys=1.81%, ctx=4595, majf=0, minf=1 00:25:27.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:27.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.493 issued rwts: total=0,7037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.493 job3: (groupid=0, jobs=1): err= 0: pid=382304: Mon Jul 15 16:24:09 2024 00:25:27.493 write: IOPS=505, BW=126MiB/s (132MB/s)(1282MiB/10149msec); 0 zone resets 00:25:27.493 slat (usec): min=26, max=64634, avg=1253.54, stdev=4087.64 00:25:27.493 clat (usec): min=1243, max=341331, avg=125276.82, stdev=79549.54 00:25:27.493 lat (usec): min=1802, max=341372, avg=126530.36, stdev=80650.36 00:25:27.493 clat percentiles (msec): 00:25:27.493 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 42], 00:25:27.493 | 30.00th=[ 72], 40.00th=[ 94], 50.00th=[ 117], 60.00th=[ 150], 00:25:27.493 | 70.00th=[ 176], 80.00th=[ 203], 90.00th=[ 228], 95.00th=[ 264], 00:25:27.493 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 342], 99.95th=[ 342], 00:25:27.493 | 99.99th=[ 342] 00:25:27.493 bw ( KiB/s): min=59392, max=230400, per=8.78%, avg=129611.30, stdev=47144.16, samples=20 00:25:27.493 iops : min= 232, max= 900, avg=506.25, stdev=184.19, samples=20 00:25:27.493 lat (msec) : 2=0.08%, 4=0.62%, 10=2.24%, 20=4.99%, 50=15.72% 00:25:27.493 lat (msec) : 100=18.16%, 250=52.03%, 500=6.16% 00:25:27.493 cpu : usr=1.83%, sys=1.59%, ctx=3388, majf=0, minf=1 00:25:27.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:27.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.493 issued rwts: total=0,5128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.493 job4: (groupid=0, jobs=1): err= 0: pid=382310: Mon Jul 15 16:24:09 2024 00:25:27.493 write: IOPS=525, BW=131MiB/s (138MB/s)(1328MiB/10113msec); 0 zone resets 00:25:27.493 slat (usec): min=21, max=193354, avg=877.96, stdev=5113.57 00:25:27.493 clat (usec): min=1186, max=527555, avg=120928.28, stdev=81285.39 00:25:27.493 lat (usec): min=1220, max=527627, avg=121806.24, stdev=82176.32 00:25:27.493 clat percentiles (msec): 00:25:27.493 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 50], 00:25:27.493 | 30.00th=[ 73], 40.00th=[ 90], 50.00th=[ 109], 60.00th=[ 131], 00:25:27.493 | 70.00th=[ 157], 80.00th=[ 188], 90.00th=[ 226], 95.00th=[ 253], 00:25:27.493 | 99.00th=[ 368], 99.50th=[ 460], 99.90th=[ 514], 99.95th=[ 523], 00:25:27.493 | 
99.99th=[ 527] 00:25:27.493 bw ( KiB/s): min=55296, max=222786, per=9.09%, avg=134266.15, stdev=46926.74, samples=20 00:25:27.493 iops : min= 216, max= 870, avg=524.35, stdev=183.19, samples=20 00:25:27.493 lat (msec) : 2=0.21%, 4=0.66%, 10=2.96%, 20=4.93%, 50=11.58% 00:25:27.493 lat (msec) : 100=25.57%, 250=48.51%, 500=5.35%, 750=0.23% 00:25:27.493 cpu : usr=1.96%, sys=1.65%, ctx=3974, majf=0, minf=1 00:25:27.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:27.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.493 issued rwts: total=0,5310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.493 job5: (groupid=0, jobs=1): err= 0: pid=382315: Mon Jul 15 16:24:09 2024 00:25:27.493 write: IOPS=472, BW=118MiB/s (124MB/s)(1204MiB/10194msec); 0 zone resets 00:25:27.493 slat (usec): min=18, max=63583, avg=1428.14, stdev=4160.36 00:25:27.493 clat (usec): min=929, max=429188, avg=133913.38, stdev=84259.94 00:25:27.493 lat (usec): min=967, max=429236, avg=135341.52, stdev=85438.04 00:25:27.493 clat percentiles (msec): 00:25:27.493 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 23], 20.00th=[ 46], 00:25:27.493 | 30.00th=[ 69], 40.00th=[ 99], 50.00th=[ 140], 60.00th=[ 169], 00:25:27.493 | 70.00th=[ 188], 80.00th=[ 209], 90.00th=[ 239], 95.00th=[ 266], 00:25:27.493 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 418], 99.95th=[ 418], 00:25:27.493 | 99.99th=[ 430] 00:25:27.493 bw ( KiB/s): min=64512, max=246272, per=8.24%, avg=121663.20, stdev=48531.88, samples=20 00:25:27.493 iops : min= 252, max= 962, avg=475.20, stdev=189.61, samples=20 00:25:27.494 lat (usec) : 1000=0.02% 00:25:27.494 lat (msec) : 2=0.25%, 4=1.10%, 10=3.78%, 20=3.74%, 50=13.24% 00:25:27.494 lat (msec) : 100=18.48%, 250=52.21%, 500=7.18% 00:25:27.494 cpu : usr=2.03%, sys=1.25%, ctx=3036, majf=0, minf=1 00:25:27.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.494 issued rwts: total=0,4817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.494 job6: (groupid=0, jobs=1): err= 0: pid=382316: Mon Jul 15 16:24:09 2024 00:25:27.494 write: IOPS=551, BW=138MiB/s (145MB/s)(1386MiB/10046msec); 0 zone resets 00:25:27.494 slat (usec): min=29, max=129315, avg=913.19, stdev=3517.71 00:25:27.494 clat (msec): min=2, max=390, avg=114.94, stdev=66.80 00:25:27.494 lat (msec): min=3, max=396, avg=115.86, stdev=67.43 00:25:27.494 clat percentiles (msec): 00:25:27.494 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 53], 00:25:27.494 | 30.00th=[ 73], 40.00th=[ 89], 50.00th=[ 105], 60.00th=[ 128], 00:25:27.494 | 70.00th=[ 157], 80.00th=[ 180], 90.00th=[ 203], 95.00th=[ 222], 00:25:27.494 | 99.00th=[ 288], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 384], 00:25:27.494 | 99.99th=[ 393] 00:25:27.494 bw ( KiB/s): min=80384, max=214016, per=9.50%, avg=140213.95, stdev=40002.43, samples=20 00:25:27.494 iops : min= 314, max= 836, avg=547.60, stdev=156.27, samples=20 00:25:27.494 lat (msec) : 4=0.23%, 10=1.77%, 20=3.82%, 50=12.66%, 100=29.24% 00:25:27.494 lat (msec) : 250=50.37%, 500=1.89% 00:25:27.494 cpu : usr=2.18%, sys=1.61%, ctx=3901, majf=0, minf=1 00:25:27.494 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.494 issued rwts: total=0,5543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.494 job7: (groupid=0, jobs=1): err= 0: pid=382317: Mon Jul 15 16:24:09 2024 00:25:27.494 write: IOPS=492, BW=123MiB/s (129MB/s)(1251MiB/10159msec); 0 zone resets 00:25:27.494 slat (usec): min=19, max=49872, avg=1088.83, stdev=3636.82 00:25:27.494 clat (usec): min=849, max=371782, avg=128728.23, stdev=74844.49 00:25:27.494 lat (usec): min=878, max=371836, avg=129817.06, stdev=75750.22 00:25:27.494 clat percentiles (msec): 00:25:27.494 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 27], 20.00th=[ 54], 00:25:27.494 | 30.00th=[ 83], 40.00th=[ 106], 50.00th=[ 130], 60.00th=[ 155], 00:25:27.494 | 70.00th=[ 176], 80.00th=[ 194], 90.00th=[ 222], 95.00th=[ 243], 00:25:27.494 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 368], 99.95th=[ 372], 00:25:27.494 | 99.99th=[ 372] 00:25:27.494 bw ( KiB/s): min=55296, max=186506, per=8.57%, avg=126475.30, stdev=35612.69, samples=20 00:25:27.494 iops : min= 216, max= 728, avg=493.95, stdev=139.11, samples=20 00:25:27.494 lat (usec) : 1000=0.18% 00:25:27.494 lat (msec) : 2=0.34%, 4=0.76%, 10=2.44%, 20=3.60%, 50=11.71% 00:25:27.494 lat (msec) : 100=19.16%, 250=57.62%, 500=4.20% 00:25:27.494 cpu : usr=1.98%, sys=1.43%, ctx=3484, majf=0, minf=1 00:25:27.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.494 issued rwts: total=0,5005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.494 job8: (groupid=0, jobs=1): err= 0: pid=382318: Mon Jul 15 16:24:09 2024 00:25:27.494 write: IOPS=581, BW=145MiB/s (152MB/s)(1477MiB/10162msec); 0 zone resets 00:25:27.494 slat (usec): min=14, max=79874, avg=846.99, stdev=3106.63 00:25:27.494 clat (usec): min=1023, max=443607, avg=109192.17, stdev=70352.44 00:25:27.494 lat (usec): min=1071, max=452125, avg=110039.16, stdev=71114.08 00:25:27.494 clat percentiles (msec): 00:25:27.494 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 39], 00:25:27.494 | 30.00th=[ 61], 40.00th=[ 86], 50.00th=[ 108], 60.00th=[ 127], 00:25:27.494 | 70.00th=[ 148], 80.00th=[ 167], 90.00th=[ 199], 95.00th=[ 224], 00:25:27.494 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 418], 99.95th=[ 435], 00:25:27.494 | 99.99th=[ 443] 00:25:27.494 bw ( KiB/s): min=86016, max=230912, per=10.13%, avg=149532.85, stdev=38570.72, samples=20 00:25:27.494 iops : min= 336, max= 902, avg=584.00, stdev=150.66, samples=20 00:25:27.494 lat (msec) : 2=0.36%, 4=0.97%, 10=3.44%, 20=4.83%, 50=15.83% 00:25:27.494 lat (msec) : 100=20.89%, 250=51.37%, 500=2.32% 00:25:27.494 cpu : usr=2.39%, sys=1.64%, ctx=4322, majf=0, minf=1 00:25:27.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.494 issued rwts: total=0,5906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.494 job9: (groupid=0, 
jobs=1): err= 0: pid=382319: Mon Jul 15 16:24:09 2024 00:25:27.494 write: IOPS=463, BW=116MiB/s (121MB/s)(1180MiB/10182msec); 0 zone resets 00:25:27.494 slat (usec): min=18, max=36947, avg=1294.78, stdev=3554.04 00:25:27.494 clat (usec): min=1004, max=352312, avg=136696.39, stdev=64021.14 00:25:27.494 lat (usec): min=1025, max=352429, avg=137991.17, stdev=64743.75 00:25:27.494 clat percentiles (msec): 00:25:27.494 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 42], 20.00th=[ 79], 00:25:27.494 | 30.00th=[ 104], 40.00th=[ 126], 50.00th=[ 146], 60.00th=[ 161], 00:25:27.494 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 211], 95.00th=[ 222], 00:25:27.494 | 99.00th=[ 284], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 351], 00:25:27.494 | 99.99th=[ 351] 00:25:27.494 bw ( KiB/s): min=73728, max=189440, per=8.07%, avg=119161.20, stdev=36337.47, samples=20 00:25:27.494 iops : min= 288, max= 740, avg=465.40, stdev=141.94, samples=20 00:25:27.494 lat (msec) : 2=0.32%, 4=0.87%, 10=1.61%, 20=1.93%, 50=7.71% 00:25:27.494 lat (msec) : 100=16.53%, 250=69.08%, 500=1.95% 00:25:27.494 cpu : usr=1.79%, sys=1.28%, ctx=2941, majf=0, minf=1 00:25:27.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.494 issued rwts: total=0,4719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.494 job10: (groupid=0, jobs=1): err= 0: pid=382320: Mon Jul 15 16:24:09 2024 00:25:27.494 write: IOPS=505, BW=126MiB/s (133MB/s)(1287MiB/10181msec); 0 zone resets 00:25:27.494 slat (usec): min=22, max=75299, avg=1142.68, stdev=3757.82 00:25:27.494 clat (usec): min=849, max=385343, avg=125224.83, stdev=75403.62 00:25:27.494 lat (usec): min=888, max=385491, avg=126367.51, stdev=76370.63 00:25:27.494 clat percentiles (msec): 00:25:27.494 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 48], 00:25:27.494 | 30.00th=[ 73], 40.00th=[ 101], 50.00th=[ 130], 60.00th=[ 153], 00:25:27.494 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 213], 95.00th=[ 230], 00:25:27.494 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:25:27.494 | 99.99th=[ 384] 00:25:27.494 bw ( KiB/s): min=47616, max=231424, per=8.82%, avg=130160.25, stdev=51322.72, samples=20 00:25:27.494 iops : min= 186, max= 904, avg=508.30, stdev=200.50, samples=20 00:25:27.494 lat (usec) : 1000=0.10% 00:25:27.494 lat (msec) : 2=0.41%, 4=0.87%, 10=2.89%, 20=4.33%, 50=12.53% 00:25:27.494 lat (msec) : 100=18.68%, 250=56.96%, 500=3.22% 00:25:27.494 cpu : usr=1.95%, sys=1.52%, ctx=3470, majf=0, minf=1 00:25:27.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:27.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.494 issued rwts: total=0,5149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.494 00:25:27.494 Run status group 0 (all jobs): 00:25:27.494 WRITE: bw=1442MiB/s (1512MB/s), 112MiB/s-174MiB/s (117MB/s-182MB/s), io=14.4GiB (15.4GB), run=10046-10194msec 00:25:27.494 00:25:27.494 Disk stats (read/write): 00:25:27.494 nvme0n1: ios=42/9061, merge=0/0, ticks=1858/1242391, in_queue=1244249, util=99.60% 00:25:27.494 nvme10n1: ios=49/11243, merge=0/0, ticks=54/1259003, in_queue=1259057, util=97.45% 00:25:27.494 
nvme1n1: ios=49/13821, merge=0/0, ticks=34/1223358, in_queue=1223392, util=97.45% 00:25:27.494 nvme2n1: ios=50/10254, merge=0/0, ticks=3269/1245530, in_queue=1248799, util=99.80% 00:25:27.494 nvme3n1: ios=46/10407, merge=0/0, ticks=2051/1177807, in_queue=1179858, util=100.00% 00:25:27.494 nvme4n1: ios=15/9590, merge=0/0, ticks=97/1241435, in_queue=1241532, util=98.45% 00:25:27.494 nvme5n1: ios=44/10717, merge=0/0, ticks=937/1227143, in_queue=1228080, util=100.00% 00:25:27.494 nvme6n1: ios=0/10003, merge=0/0, ticks=0/1254037, in_queue=1254037, util=98.36% 00:25:27.494 nvme7n1: ios=0/11802, merge=0/0, ticks=0/1254392, in_queue=1254392, util=98.78% 00:25:27.494 nvme8n1: ios=0/9391, merge=0/0, ticks=0/1245389, in_queue=1245389, util=98.85% 00:25:27.494 nvme9n1: ios=41/10269, merge=0/0, ticks=1847/1247938, in_queue=1249785, util=99.81% 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:27.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:27.494 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:27.494 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:27.495 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # 
grep -q -w SPDK2 00:25:27.495 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:27.495 16:24:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:27.495 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.495 16:24:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:27.495 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.495 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:27.753 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
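Teardown mirrors setup, one subsystem at a time: the host detaches first (nvme disconnect), waitforserial_disconnect then polls lsblk until the serial disappears (note grep -q -w here, an exact-word presence test, where the connect path used grep -c to count matches), and only then is the subsystem deleted on the target side over JSON-RPC, so no controller is still attached when it goes away. A sketch of the loop as reconstructed from the multiconnection.sh@37-@40 frames (rpc_cmd wraps SPDK's JSON-RPC client):

    # Reconstruction of target/multiconnection.sh lines 37-40 from the xtrace.
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n nqn.2016-06.io.spdk:cnode$i
        waitforserial_disconnect SPDK$i   # wait until lsblk no longer lists the serial
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    done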
00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:27.753 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:28.011 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:28.011 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.011 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.270 16:24:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.270 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.270 16:24:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:28.270 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:28.270 16:24:11 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.270 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:28.529 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:28.529 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.529 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:28.788 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:28.788 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:28.788 16:24:11 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:28.788 16:24:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.789 rmmod nvme_tcp 00:25:28.789 rmmod nvme_fabrics 00:25:28.789 rmmod nvme_keyring 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 376858 ']' 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 376858 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 376858 ']' 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 376858 00:25:28.789 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:29.049 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:29.049 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 376858 00:25:29.049 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:29.049 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:29.049 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 376858' 00:25:29.049 killing process with pid 376858 00:25:29.049 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 376858 00:25:29.049 16:24:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 376858 00:25:29.620 16:24:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.620 16:24:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.620 16:24:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.620 16:24:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.620 16:24:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.620 16:24:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.620 16:24:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.620 16:24:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.531 16:24:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.531 00:25:31.531 real 1m0.212s 00:25:31.531 user 3m28.235s 00:25:31.531 sys 0m24.347s 00:25:31.531 16:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:31.531 16:24:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.531 ************************************ 00:25:31.531 END TEST nvmf_multiconnection 00:25:31.531 ************************************ 00:25:31.531 16:24:14 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:31.531 16:24:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:31.531 16:24:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:31.531 16:24:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:31.531 ************************************ 00:25:31.531 START TEST nvmf_initiator_timeout 00:25:31.531 ************************************ 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:31.531 * Looking for test storage... 00:25:31.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.531 16:24:14 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.531 16:24:14 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.531 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.532 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.532 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.532 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.532 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.532 16:24:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- 
# mlx=() 00:25:34.067 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:34.068 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:34.068 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:34.068 Found net devices under 0000:84:00.0: cvl_0_0 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:34.068 Found net devices under 0000:84:00.1: cvl_0_1 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:25:34.068 00:25:34.068 --- 10.0.0.2 ping statistics --- 00:25:34.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.068 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:34.068 00:25:34.068 --- 10.0.0.1 ping statistics --- 00:25:34.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.068 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=386273 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 386273 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 386273 ']' 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:34.068 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.068 [2024-07-15 16:24:16.677947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:34.068 [2024-07-15 16:24:16.678020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.068 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.068 [2024-07-15 16:24:16.742960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.068 [2024-07-15 16:24:16.829464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
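The target launch traced just above (nvmf/common.sh@480-482) condenses to the following sketch. The -i/-e/-m flags, the namespace name, and pid 386273 are copied from the trace; using $rootdir for the spdk checkout path is an assumption for brevity:

    # nvmfappstart as traced: run nvmf_tgt inside the test namespace and
    # block until its RPC socket (/var/tmp/spdk.sock) answers.
    ip netns exec cvl_0_0_ns_spdk \
        "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # $rootdir: assumed spdk checkout path
    nvmfpid=$!                # 386273 in this run
    waitforlisten "$nvmfpid"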
00:25:34.068 [2024-07-15 16:24:16.829507] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.068 [2024-07-15 16:24:16.829535] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.068 [2024-07-15 16:24:16.829546] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.068 [2024-07-15 16:24:16.829555] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.068 [2024-07-15 16:24:16.829635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.069 [2024-07-15 16:24:16.829700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.069 [2024-07-15 16:24:16.829752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.069 [2024-07-15 16:24:16.829758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.069 16:24:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 Malloc0 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 Delay0 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 [2024-07-15 16:24:17.023655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.069 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.329 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.329 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.329 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 [2024-07-15 16:24:17.051953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.329 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.329 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:34.897 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:34.897 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:34.897 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.897 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:34.897 16:24:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=386576 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:36.800 16:24:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:36.800 [global] 00:25:36.800 thread=1 00:25:36.800 invalidate=1 00:25:36.800 rw=write 00:25:36.800 time_based=1 00:25:36.800 runtime=60 00:25:36.800 
ioengine=libaio 00:25:36.800 direct=1 00:25:36.800 bs=4096 00:25:36.800 iodepth=1 00:25:36.800 norandommap=0 00:25:36.800 numjobs=1 00:25:36.800 00:25:36.800 verify_dump=1 00:25:36.800 verify_backlog=512 00:25:36.800 verify_state_save=0 00:25:36.800 do_verify=1 00:25:36.800 verify=crc32c-intel 00:25:36.800 [job0] 00:25:36.800 filename=/dev/nvme0n1 00:25:36.800 Could not set queue depth (nvme0n1) 00:25:37.057 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:37.057 fio-3.35 00:25:37.057 Starting 1 thread 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.335 true 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.335 true 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.335 true 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.335 true 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.335 16:24:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.871 true 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.871 true 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.871 
16:24:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.871 true 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:42.871 true 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:42.871 16:24:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 386576 00:26:39.142 00:26:39.142 job0: (groupid=0, jobs=1): err= 0: pid=386769: Mon Jul 15 16:25:20 2024 00:26:39.142 read: IOPS=7, BW=31.3KiB/s (32.1kB/s)(1880KiB/60029msec) 00:26:39.142 slat (usec): min=7, max=13770, avg=45.41, stdev=634.48 00:26:39.142 clat (usec): min=309, max=40993k, avg=127375.05, stdev=1889028.58 00:26:39.142 lat (usec): min=318, max=40993k, avg=127420.46, stdev=1889027.23 00:26:39.142 clat percentiles (usec): 00:26:39.142 | 1.00th=[ 441], 5.00th=[ 40633], 10.00th=[ 41157], 00:26:39.142 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:26:39.142 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:26:39.142 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:39.142 | 99.00th=[ 42206], 99.50th=[ 43254], 99.90th=[17112761], 00:26:39.142 | 99.95th=[17112761], 99.99th=[17112761] 00:26:39.142 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60029msec); 0 zone resets 00:26:39.142 slat (nsec): min=7322, max=81327, avg=14147.60, stdev=9723.18 00:26:39.142 clat (usec): min=181, max=1650, avg=252.34, stdev=85.67 00:26:39.142 lat (usec): min=189, max=1677, avg=266.49, stdev=90.24 00:26:39.142 clat percentiles (usec): 00:26:39.142 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:26:39.142 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 243], 00:26:39.142 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 343], 95.00th=[ 388], 00:26:39.142 | 99.00th=[ 441], 99.50th=[ 510], 99.90th=[ 1647], 99.95th=[ 1647], 00:26:39.142 | 99.99th=[ 1647] 00:26:39.142 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:26:39.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:39.142 lat (usec) : 250=33.50%, 500=18.84%, 750=0.61% 00:26:39.142 lat (msec) : 2=0.10%, 50=46.84%, >=2000=0.10% 00:26:39.142 cpu : usr=0.02%, sys=0.02%, ctx=984, majf=0, minf=2 00:26:39.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.142 issued rwts: total=470,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:39.142 00:26:39.142 Run status group 0 (all jobs): 00:26:39.142 READ: bw=31.3KiB/s (32.1kB/s), 31.3KiB/s-31.3KiB/s 
(32.1kB/s-32.1kB/s), io=1880KiB (1925kB), run=60029-60029msec 00:26:39.142 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60029-60029msec 00:26:39.142 00:26:39.142 Disk stats (read/write): 00:26:39.142 nvme0n1: ios=565/512, merge=0/0, ticks=18864/116, in_queue=18980, util=99.76% 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:39.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:39.142 nvmf hotplug test: fio successful as expected 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:39.142 rmmod nvme_tcp 00:26:39.142 rmmod nvme_fabrics 00:26:39.142 rmmod nvme_keyring 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 386273 ']' 
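Stepping back, the initiator_timeout test that just completed follows a simple arc: export a Malloc bdev wrapped in a delay bdev over TCP, push the delay latencies past the initiator's I/O timeout while fio writes (31 s and 310 s in the microsecond units these RPCs take, against the kernel initiator's nominal 30 s timeout), then restore them so outstanding I/O completes. A condensed sketch assembled from the RPCs traced above (all values copied from the trace; the loop grouping is editorial):

    # Condensed from the initiator_timeout.sh@19-51 markers in this log.
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in microseconds
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # Raise every latency knob past the initiator timeout while fio is writing ...
    for lat in avg_read avg_write p99_read; do
        rpc_cmd bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # ... then drop them back so I/O drains and fio exits cleanly.
    for lat in avg_read avg_write p99_read p99_write; do
        rpc_cmd bdev_delay_update_latency Delay0 "$lat" 30
    done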
00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 386273 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 386273 ']' 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 386273 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 386273 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 386273' 00:26:39.142 killing process with pid 386273 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 386273 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 386273 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:39.142 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:39.143 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:39.143 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.143 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.143 16:25:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.712 16:25:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:39.712 00:26:39.712 real 1m8.124s 00:26:39.712 user 4m10.799s 00:26:39.712 sys 0m6.339s 00:26:39.712 16:25:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:39.712 16:25:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:39.712 ************************************ 00:26:39.712 END TEST nvmf_initiator_timeout 00:26:39.712 ************************************ 00:26:39.712 16:25:22 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:39.712 16:25:22 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:39.712 16:25:22 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:39.712 16:25:22 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.712 16:25:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 
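Both suites in this log exit through the same nvmftestfini path (nvmf/common.sh@488-496 plus the @117-125 module cleanup, traced here and earlier after multiconnection). An outline under stated assumptions: the trace shows only one pass through the modprobe loop, so the break condition below is a guess, and error handling is simplified:

    # Outline of nvmftestfini for the tcp transport, per the markers above.
    sync
    set +e
    for i in {1..20}; do
        # Unload the initiator modules; the rmmod output for nvme_tcp,
        # nvme_fabrics and nvme_keyring in the log comes from this step.
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # break condition assumed
    done
    set -e
    [ -n "$nvmfpid" ] && killprocess "$nvmfpid"   # kill after confirming reactor_0, then wait
    _remove_spdk_ns                               # delete the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1                      # strip the 10.0.0.1/24 test address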
00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:41.643 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:41.643 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.643 16:25:24 nvmf_tcp -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:41.643 Found net devices under 0000:84:00.0: cvl_0_0 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:41.643 Found net devices under 0000:84:00.1: cvl_0_1 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:41.643 16:25:24 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:41.643 16:25:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:41.643 16:25:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.643 16:25:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.643 ************************************ 00:26:41.643 START TEST nvmf_perf_adq 00:26:41.643 ************************************ 00:26:41.643 16:25:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:41.643 * Looking for test storage... 
00:26:41.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:41.643 16:25:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.643 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:41.643 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.644 16:25:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:43.550 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:43.550 Found 0000:84:00.1 (0x8086 - 0x159b) 
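Both ports of the Intel E810 adapter (vendor 0x8086, device 0x159b, bound to the ice driver) are being rediscovered here for the perf_adq test: gather_supported_nvmf_pci_devs matches a cache of PCI vendor:device IDs for supported E810, X722, and Mellanox parts, then resolves each matched port to its kernel net device through sysfs. A hedged standalone equivalent of that matching step using pciutils, not the SPDK helper itself:

    # Sketch only: find E810 ports by PCI ID and list their net devices.
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "Found $bdf (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$bdf/net/"    # e.g. cvl_0_0, cvl_0_1
    done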
00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:43.550 Found net devices under 0000:84:00.0: cvl_0_0 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:43.550 Found net devices under 0000:84:00.1: cvl_0_1 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:43.550 16:25:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:44.487 16:25:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:46.390 16:25:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:51.659 16:25:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.659 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:51.660 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:51.660 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:51.660 Found net devices under 0000:84:00.0: cvl_0_0 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:51.660 Found net devices under 0000:84:00.1: cvl_0_1 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.660 16:25:34 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:26:51.660 00:26:51.660 --- 10.0.0.2 ping statistics --- 00:26:51.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.660 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:26:51.660 00:26:51.660 --- 10.0.0.1 ping statistics --- 00:26:51.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.660 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.660 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=398304 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 398304 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 398304 ']' 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.661 [2024-07-15 16:25:34.326183] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
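Before the target starts, nvmf_tcp_init has built a two-port loopback topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the default namespace as 10.0.0.1, and the pings in both directions confirm the path. Condensed from the trace above (interface names are this rig's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check

Running the target inside the namespace is what lets a single dual-port host act as separate initiator and target machines over a real NIC.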
00:26:51.661 [2024-07-15 16:25:34.326263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.661 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.661 [2024-07-15 16:25:34.396468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.661 [2024-07-15 16:25:34.488962] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.661 [2024-07-15 16:25:34.489019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.661 [2024-07-15 16:25:34.489035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.661 [2024-07-15 16:25:34.489049] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.661 [2024-07-15 16:25:34.489060] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.661 [2024-07-15 16:25:34.489151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.661 [2024-07-15 16:25:34.489220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.661 [2024-07-15 16:25:34.489316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.661 [2024-07-15 16:25:34.489318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.661 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.920 [2024-07-15 16:25:34.693233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.920 Malloc1 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.920 [2024-07-15 16:25:34.743688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=398345 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:51.920 16:25:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:51.920 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.825 16:25:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:53.825 16:25:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.825 16:25:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.825 16:25:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.825 16:25:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:53.825 "tick_rate": 2700000000, 00:26:53.825 
"poll_groups": [ 00:26:53.825 { 00:26:53.825 "name": "nvmf_tgt_poll_group_000", 00:26:53.825 "admin_qpairs": 1, 00:26:53.825 "io_qpairs": 1, 00:26:53.825 "current_admin_qpairs": 1, 00:26:53.825 "current_io_qpairs": 1, 00:26:53.825 "pending_bdev_io": 0, 00:26:53.825 "completed_nvme_io": 19649, 00:26:53.825 "transports": [ 00:26:53.825 { 00:26:53.825 "trtype": "TCP" 00:26:53.825 } 00:26:53.825 ] 00:26:53.825 }, 00:26:53.825 { 00:26:53.825 "name": "nvmf_tgt_poll_group_001", 00:26:53.825 "admin_qpairs": 0, 00:26:53.825 "io_qpairs": 1, 00:26:53.825 "current_admin_qpairs": 0, 00:26:53.825 "current_io_qpairs": 1, 00:26:53.825 "pending_bdev_io": 0, 00:26:53.825 "completed_nvme_io": 20026, 00:26:53.825 "transports": [ 00:26:53.825 { 00:26:53.825 "trtype": "TCP" 00:26:53.825 } 00:26:53.825 ] 00:26:53.825 }, 00:26:53.825 { 00:26:53.825 "name": "nvmf_tgt_poll_group_002", 00:26:53.825 "admin_qpairs": 0, 00:26:53.825 "io_qpairs": 1, 00:26:53.825 "current_admin_qpairs": 0, 00:26:53.825 "current_io_qpairs": 1, 00:26:53.825 "pending_bdev_io": 0, 00:26:53.825 "completed_nvme_io": 20323, 00:26:53.825 "transports": [ 00:26:53.825 { 00:26:53.825 "trtype": "TCP" 00:26:53.825 } 00:26:53.825 ] 00:26:53.825 }, 00:26:53.825 { 00:26:53.825 "name": "nvmf_tgt_poll_group_003", 00:26:53.825 "admin_qpairs": 0, 00:26:53.825 "io_qpairs": 1, 00:26:53.825 "current_admin_qpairs": 0, 00:26:53.825 "current_io_qpairs": 1, 00:26:53.825 "pending_bdev_io": 0, 00:26:53.825 "completed_nvme_io": 19666, 00:26:53.825 "transports": [ 00:26:53.825 { 00:26:53.825 "trtype": "TCP" 00:26:53.825 } 00:26:53.825 ] 00:26:53.825 } 00:26:53.825 ] 00:26:53.825 }' 00:26:53.825 16:25:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:53.825 16:25:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:54.083 16:25:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:54.083 16:25:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:54.083 16:25:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 398345 00:27:02.197 Initializing NVMe Controllers 00:27:02.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:02.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:02.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:02.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:02.197 Initialization complete. Launching workers. 
00:27:02.197 ======================================================== 00:27:02.197 Latency(us) 00:27:02.197 Device Information : IOPS MiB/s Average min max 00:27:02.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10245.50 40.02 6246.17 2921.70 8745.33 00:27:02.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10476.90 40.93 6110.95 2442.43 8934.35 00:27:02.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10621.50 41.49 6027.76 2627.76 9344.82 00:27:02.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10242.00 40.01 6249.80 2882.31 9454.48 00:27:02.197 ======================================================== 00:27:02.197 Total : 41585.90 162.44 6157.21 2442.43 9454.48 00:27:02.197 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.197 rmmod nvme_tcp 00:27:02.197 rmmod nvme_fabrics 00:27:02.197 rmmod nvme_keyring 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 398304 ']' 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 398304 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 398304 ']' 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 398304 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 398304 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 398304' 00:27:02.197 killing process with pid 398304 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 398304 00:27:02.197 16:25:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 398304 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.458 16:25:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.364 16:25:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.364 16:25:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:04.364 16:25:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:04.931 16:25:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:07.463 16:25:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.757 16:25:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:12.757 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:12.757 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.757 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:12.758 Found net devices under 0000:84:00.0: cvl_0_0 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:12.758 Found net devices under 0000:84:00.1: cvl_0_1 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.758 
16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:27:12.758 00:27:12.758 --- 10.0.0.2 ping statistics --- 00:27:12.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.758 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:12.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:27:12.758 00:27:12.758 --- 10.0.0.1 ping statistics --- 00:27:12.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.758 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.758 16:25:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:12.758 net.core.busy_poll = 1 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:12.758 net.core.busy_read = 1 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=400946 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 400946 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 400946 ']' 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.758 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 [2024-07-15 16:25:55.202209] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:12.759 [2024-07-15 16:25:55.202305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.759 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.759 [2024-07-15 16:25:55.276285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.759 [2024-07-15 16:25:55.368182] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.759 [2024-07-15 16:25:55.368246] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.759 [2024-07-15 16:25:55.368272] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.759 [2024-07-15 16:25:55.368286] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.759 [2024-07-15 16:25:55.368298] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
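[annotation] Condensed, the ADQ host configuration traced above amounts to the commands below. They are taken from the trace itself; the device name, IP address, and port are the values used in this run, and the `ip netns exec cvl_0_0_ns_spdk` prefixes on the device-scoped commands are dropped for readability.

    # Enable hardware traffic-class offload and busy polling
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: priority 0 -> TC0, priority 1 -> TC1; queues 0-1
    # serve TC0, queues 2-3 serve TC1; offloaded to hardware in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, hardware-only match
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # The harness then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to line up
    # transmit-queue (XPS) CPU masks with the receive queues
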
00:27:12.759 [2024-07-15 16:25:55.368381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.759 [2024-07-15 16:25:55.368447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.759 [2024-07-15 16:25:55.368545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.759 [2024-07-15 16:25:55.368547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 [2024-07-15 16:25:55.572633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 Malloc1 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.759 [2024-07-15 16:25:55.625919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=400975 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:12.759 16:25:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:12.759 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.688 16:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:14.688 16:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.688 16:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.688 16:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.688 16:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:14.688 "tick_rate": 2700000000, 00:27:14.688 "poll_groups": [ 00:27:14.688 { 00:27:14.688 "name": "nvmf_tgt_poll_group_000", 00:27:14.688 "admin_qpairs": 1, 00:27:14.688 "io_qpairs": 1, 00:27:14.688 "current_admin_qpairs": 1, 00:27:14.688 "current_io_qpairs": 1, 00:27:14.688 "pending_bdev_io": 0, 00:27:14.688 "completed_nvme_io": 25446, 00:27:14.688 "transports": [ 00:27:14.688 { 00:27:14.688 "trtype": "TCP" 00:27:14.688 } 00:27:14.688 ] 00:27:14.688 }, 00:27:14.688 { 00:27:14.688 "name": "nvmf_tgt_poll_group_001", 00:27:14.688 "admin_qpairs": 0, 00:27:14.688 "io_qpairs": 3, 00:27:14.688 "current_admin_qpairs": 0, 00:27:14.688 "current_io_qpairs": 3, 00:27:14.688 "pending_bdev_io": 0, 00:27:14.688 "completed_nvme_io": 27092, 00:27:14.688 "transports": [ 00:27:14.688 { 00:27:14.688 "trtype": "TCP" 00:27:14.688 } 00:27:14.688 ] 00:27:14.688 }, 00:27:14.688 { 00:27:14.688 "name": "nvmf_tgt_poll_group_002", 00:27:14.688 "admin_qpairs": 0, 00:27:14.688 "io_qpairs": 0, 00:27:14.688 "current_admin_qpairs": 0, 00:27:14.688 "current_io_qpairs": 0, 00:27:14.688 "pending_bdev_io": 0, 00:27:14.688 "completed_nvme_io": 0, 
00:27:14.688 "transports": [ 00:27:14.688 { 00:27:14.688 "trtype": "TCP" 00:27:14.688 } 00:27:14.688 ] 00:27:14.688 }, 00:27:14.688 { 00:27:14.688 "name": "nvmf_tgt_poll_group_003", 00:27:14.688 "admin_qpairs": 0, 00:27:14.688 "io_qpairs": 0, 00:27:14.688 "current_admin_qpairs": 0, 00:27:14.688 "current_io_qpairs": 0, 00:27:14.688 "pending_bdev_io": 0, 00:27:14.688 "completed_nvme_io": 0, 00:27:14.688 "transports": [ 00:27:14.688 { 00:27:14.688 "trtype": "TCP" 00:27:14.688 } 00:27:14.688 ] 00:27:14.688 } 00:27:14.688 ] 00:27:14.688 }' 00:27:14.688 16:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:14.688 16:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:14.948 16:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:14.948 16:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:14.948 16:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 400975 00:27:23.057 Initializing NVMe Controllers 00:27:23.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:23.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:23.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:23.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:23.057 Initialization complete. Launching workers. 00:27:23.057 ======================================================== 00:27:23.057 Latency(us) 00:27:23.057 Device Information : IOPS MiB/s Average min max 00:27:23.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13217.20 51.63 4857.77 1674.32 45943.06 00:27:23.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4984.60 19.47 12881.86 2164.98 59788.97 00:27:23.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4688.40 18.31 13654.75 1927.55 60528.34 00:27:23.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4486.70 17.53 14311.70 1982.98 60320.02 00:27:23.057 ======================================================== 00:27:23.057 Total : 27376.89 106.94 9374.63 1674.32 60528.34 00:27:23.057 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.057 rmmod nvme_tcp 00:27:23.057 rmmod nvme_fabrics 00:27:23.057 rmmod nvme_keyring 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 400946 ']' 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 400946 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 400946 ']' 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 400946 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 400946 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 400946' 00:27:23.057 killing process with pid 400946 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 400946 00:27:23.057 16:26:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 400946 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.317 16:26:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.603 16:26:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:26.603 16:26:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:26.603 00:27:26.603 real 0m44.679s 00:27:26.603 user 2m39.676s 00:27:26.603 sys 0m9.560s 00:27:26.603 16:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:26.603 16:26:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.603 ************************************ 00:27:26.603 END TEST nvmf_perf_adq 00:27:26.603 ************************************ 00:27:26.603 16:26:09 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:26.603 16:26:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:26.603 16:26:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:26.603 16:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.603 ************************************ 00:27:26.603 START TEST nvmf_shutdown 00:27:26.603 ************************************ 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:26.603 * Looking for test storage... 
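[annotation] The target-side setup behind the perf_adq numbers above reduces to the RPC sequence below, condensed from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py). The final pipeline is the ADQ sanity check: with traffic steered to two hardware channels, at least two of the four poll groups must finish with zero I/O qpairs. In the trace the stats snapshot is taken mid-run, while spdk_nvme_perf is driving I/O, and the error message here is illustrative rather than copied from the script.

    rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # ADQ check: count poll groups that saw no I/O qpairs; the test wants >= 2
    count=$(rpc_cmd nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ steering check failed: only $count idle poll groups" >&2
    fi
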
00:27:26.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:26.603 16:26:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:26.603 ************************************ 00:27:26.603 START TEST nvmf_shutdown_tc1 00:27:26.604 ************************************ 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:26.604 16:26:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.604 16:26:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:28.502 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:28.502 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.502 16:26:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:28.502 Found net devices under 0000:84:00.0: cvl_0_0 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:28.502 Found net devices under 0000:84:00.1: cvl_0_1 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:27:28.502 00:27:28.502 --- 10.0.0.2 ping statistics --- 00:27:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.502 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:27:28.502 00:27:28.502 --- 10.0.0.1 ping statistics --- 00:27:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.502 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.502 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=404278 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 404278 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 404278 ']' 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:28.503 16:26:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.503 [2024-07-15 16:26:11.449137] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:27:28.503 [2024-07-15 16:26:11.449224] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.761 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.762 [2024-07-15 16:26:11.525405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.762 [2024-07-15 16:26:11.616964] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.762 [2024-07-15 16:26:11.617027] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.762 [2024-07-15 16:26:11.617054] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.762 [2024-07-15 16:26:11.617068] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.762 [2024-07-15 16:26:11.617080] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.762 [2024-07-15 16:26:11.617175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.762 [2024-07-15 16:26:11.617270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.762 [2024-07-15 16:26:11.617336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:28.762 [2024-07-15 16:26:11.617338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.699 [2024-07-15 16:26:12.442715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.699 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.699 Malloc1 00:27:29.699 [2024-07-15 16:26:12.517775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.699 Malloc2 00:27:29.699 Malloc3 00:27:29.699 Malloc4 00:27:29.957 Malloc5 00:27:29.957 Malloc6 00:27:29.957 Malloc7 00:27:29.957 Malloc8 00:27:29.957 Malloc9 00:27:29.957 Malloc10 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=404464 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 404464 /var/tmp/bdevperf.sock 00:27:30.214 16:26:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 404464 ']' 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.214 { 00:27:30.214 "params": { 00:27:30.214 "name": "Nvme$subsystem", 00:27:30.214 "trtype": "$TEST_TRANSPORT", 00:27:30.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.214 "adrfam": "ipv4", 00:27:30.214 "trsvcid": "$NVMF_PORT", 00:27:30.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.214 "hdgst": ${hdgst:-false}, 00:27:30.214 "ddgst": ${ddgst:-false} 00:27:30.214 }, 00:27:30.214 "method": "bdev_nvme_attach_controller" 00:27:30.214 } 00:27:30.214 EOF 00:27:30.214 )") 00:27:30.214 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": "$TEST_TRANSPORT", 00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": 
"$TEST_TRANSPORT", 00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": "$TEST_TRANSPORT", 00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": "$TEST_TRANSPORT", 00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": "$TEST_TRANSPORT", 00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": "$TEST_TRANSPORT", 
00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": "$TEST_TRANSPORT", 00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.215 "name": "Nvme$subsystem", 00:27:30.215 "trtype": "$TEST_TRANSPORT", 00:27:30.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.215 "adrfam": "ipv4", 00:27:30.215 "trsvcid": "$NVMF_PORT", 00:27:30.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.215 "hdgst": ${hdgst:-false}, 00:27:30.215 "ddgst": ${ddgst:-false} 00:27:30.215 }, 00:27:30.215 "method": "bdev_nvme_attach_controller" 00:27:30.215 } 00:27:30.215 EOF 00:27:30.215 )") 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:30.215 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:30.215 { 00:27:30.215 "params": { 00:27:30.216 "name": "Nvme$subsystem", 00:27:30.216 "trtype": "$TEST_TRANSPORT", 00:27:30.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "$NVMF_PORT", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.216 "hdgst": ${hdgst:-false}, 00:27:30.216 "ddgst": ${ddgst:-false} 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 } 00:27:30.216 EOF 00:27:30.216 )") 00:27:30.216 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:30.216 16:26:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:30.216 16:26:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:30.216 16:26:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme1", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme2", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme3", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme4", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme5", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme6", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme7", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme8", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:30.216 "hdgst": false, 
00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme9", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 },{ 00:27:30.216 "params": { 00:27:30.216 "name": "Nvme10", 00:27:30.216 "trtype": "tcp", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "adrfam": "ipv4", 00:27:30.216 "trsvcid": "4420", 00:27:30.216 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:30.216 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:30.216 "hdgst": false, 00:27:30.216 "ddgst": false 00:27:30.216 }, 00:27:30.216 "method": "bdev_nvme_attach_controller" 00:27:30.216 }' 00:27:30.216 [2024-07-15 16:26:13.005807] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:30.216 [2024-07-15 16:26:13.005882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:30.216 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.216 [2024-07-15 16:26:13.074097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.216 [2024-07-15 16:26:13.162468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 404464 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:32.116 16:26:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:33.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 404464 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 404278 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.052 "trtype": "$TEST_TRANSPORT", 00:27:33.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.052 "adrfam": "ipv4", 00:27:33.052 "trsvcid": "$NVMF_PORT", 00:27:33.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.052 "hdgst": ${hdgst:-false}, 00:27:33.052 "ddgst": ${ddgst:-false} 00:27:33.052 }, 00:27:33.052 "method": "bdev_nvme_attach_controller" 00:27:33.052 } 00:27:33.052 EOF 00:27:33.052 )") 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.052 16:26:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.052 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.052 { 00:27:33.052 "params": { 00:27:33.052 "name": "Nvme$subsystem", 00:27:33.053 "trtype": "$TEST_TRANSPORT", 00:27:33.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "$NVMF_PORT", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.053 "hdgst": ${hdgst:-false}, 00:27:33.053 "ddgst": ${ddgst:-false} 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 } 00:27:33.053 EOF 00:27:33.053 )") 00:27:33.053 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.053 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.053 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.053 { 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme$subsystem", 00:27:33.053 "trtype": "$TEST_TRANSPORT", 00:27:33.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "$NVMF_PORT", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.053 "hdgst": ${hdgst:-false}, 00:27:33.053 "ddgst": ${ddgst:-false} 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 } 00:27:33.053 EOF 00:27:33.053 )") 00:27:33.053 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.053 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
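The resolved JSON printed next is handed to bdevperf over an anonymous pipe, which is why the shutdown.sh@91 trace shows --json /dev/fd/62. The equivalent standalone invocation, with rootdir matching the workspace path seen throughout this log:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# bdevperf reads its bdev config from the process-substitution fd, then
# runs the queue-depth-64, 64 KiB verify workload for 1 second.
"$rootdir/build/examples/bdevperf" \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 1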
00:27:33.053 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:33.053 16:26:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme1", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme2", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme3", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme4", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme5", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme6", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme7", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme8", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:33.053 "hdgst": false, 
00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme9", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 },{ 00:27:33.053 "params": { 00:27:33.053 "name": "Nvme10", 00:27:33.053 "trtype": "tcp", 00:27:33.053 "traddr": "10.0.0.2", 00:27:33.053 "adrfam": "ipv4", 00:27:33.053 "trsvcid": "4420", 00:27:33.053 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:33.053 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:33.053 "hdgst": false, 00:27:33.053 "ddgst": false 00:27:33.053 }, 00:27:33.053 "method": "bdev_nvme_attach_controller" 00:27:33.053 }' 00:27:33.312 [2024-07-15 16:26:16.038645] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:33.312 [2024-07-15 16:26:16.038752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404881 ] 00:27:33.312 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.312 [2024-07-15 16:26:16.106010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.312 [2024-07-15 16:26:16.196810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.208 Running I/O for 1 seconds... 00:27:36.575 00:27:36.575 Latency(us) 00:27:36.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.575 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme1n1 : 1.19 214.46 13.40 0.00 0.00 295657.43 20680.25 281173.71 00:27:36.575 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme2n1 : 1.21 212.25 13.27 0.00 0.00 294210.37 33787.45 254765.13 00:27:36.575 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme3n1 : 1.15 223.06 13.94 0.00 0.00 275018.15 21748.24 260978.92 00:27:36.575 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme4n1 : 1.07 238.74 14.92 0.00 0.00 251048.96 17379.18 257872.02 00:27:36.575 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme5n1 : 1.20 213.06 13.32 0.00 0.00 279338.48 20777.34 276513.37 00:27:36.575 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme6n1 : 1.22 210.63 13.16 0.00 0.00 278217.77 30874.74 304475.40 00:27:36.575 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme7n1 : 1.21 264.46 16.53 0.00 0.00 215776.98 14175.19 254765.13 00:27:36.575 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 
00:27:36.575 Nvme8n1 : 1.18 216.39 13.52 0.00 0.00 260979.11 17185.00 246997.90 00:27:36.575 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme9n1 : 1.22 262.38 16.40 0.00 0.00 211740.44 8883.77 254765.13 00:27:36.575 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:36.575 Verification LBA range: start 0x0 length 0x400 00:27:36.575 Nvme10n1 : 1.22 261.33 16.33 0.00 0.00 210050.01 19223.89 257872.02 00:27:36.575 =================================================================================================================== 00:27:36.575 Total : 2316.75 144.80 0.00 0.00 254086.47 8883.77 304475.40 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.575 rmmod nvme_tcp 00:27:36.575 rmmod nvme_fabrics 00:27:36.575 rmmod nvme_keyring 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 404278 ']' 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 404278 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 404278 ']' 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 404278 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 404278 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:36.575 16:26:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 404278' 00:27:36.575 killing process with pid 404278 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 404278 00:27:36.575 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 404278 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.139 16:26:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.046 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:39.046 00:27:39.046 real 0m12.680s 00:27:39.046 user 0m38.666s 00:27:39.046 sys 0m3.177s 00:27:39.046 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:39.046 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:39.046 ************************************ 00:27:39.046 END TEST nvmf_shutdown_tc1 00:27:39.046 ************************************ 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 ************************************ 00:27:39.305 START TEST nvmf_shutdown_tc2 00:27:39.305 ************************************ 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:39.305 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:39.305 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:39.306 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:39.306 Found net devices under 0000:84:00.0: cvl_0_0 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:39.306 Found net devices under 0000:84:00.1: cvl_0_1 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:39.306 16:26:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:27:39.306 00:27:39.306 --- 10.0.0.2 ping statistics --- 00:27:39.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.306 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:27:39.306 00:27:39.306 --- 10.0.0.1 ping statistics --- 00:27:39.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.306 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=405772 
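Condensed, the nvmf_tcp_init sequence traced above splits the two e810 ports: cvl_0_0 moves into a private namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, the NVMe/TCP port is opened in the firewall, and both directions are ping-verified:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The namespace split is what lets one physical machine exercise the real NIC datapath: packets actually leave one port and arrive on the other instead of short-circuiting through loopback.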
00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 405772 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 405772 ']' 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:39.306 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.564 [2024-07-15 16:26:22.299322] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:39.564 [2024-07-15 16:26:22.299412] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.564 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.564 [2024-07-15 16:26:22.373511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.564 [2024-07-15 16:26:22.467519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.564 [2024-07-15 16:26:22.467576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.564 [2024-07-15 16:26:22.467604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.564 [2024-07-15 16:26:22.467618] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.564 [2024-07-15 16:26:22.467631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
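The target is then started inside the namespace and the script blocks in waitforlisten until its RPC socket answers. The polling loop itself is not expanded in the trace; a simplified stand-in follows (the /var/tmp/spdk.sock path and the rpc_get_methods probe are assumptions about the helper, not lifted from this log):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll until the app answers on its RPC socket; the real waitforlisten
# also enforces a retry cap (max_retries=100 in the trace above).
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" || break   # stop waiting if the target died
  sleep 0.5
done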
00:27:39.564 [2024-07-15 16:26:22.467716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.564 [2024-07-15 16:26:22.467840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.564 [2024-07-15 16:26:22.467865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:39.564 [2024-07-15 16:26:22.467867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.822 [2024-07-15 16:26:22.608266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:39.822 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:39.823 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:39.823 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.823 16:26:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.823 Malloc1 00:27:39.823 [2024-07-15 16:26:22.683184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.823 Malloc2 00:27:39.823 Malloc3 00:27:39.823 Malloc4 00:27:40.082 Malloc5 00:27:40.082 Malloc6 00:27:40.082 Malloc7 00:27:40.082 Malloc8 00:27:40.082 Malloc9 00:27:40.341 Malloc10 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=405828 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 405828 /var/tmp/bdevperf.sock 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 405828 ']' 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:40.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
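The create_subsystems loop above only shows rpcs.txt being assembled and applied as a batch; per subsystem it boils down to a malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420, on top of the TCP transport created at shutdown.sh@20 (-o disables the TCP C2H success optimization). Spelled out as individual calls for cnode1, with illustrative size and serial values the log does not print, and rpc.py standing for $rootdir/scripts/rpc.py:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420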
00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.341 { 00:27:40.341 "params": { 00:27:40.341 "name": "Nvme$subsystem", 00:27:40.341 "trtype": "$TEST_TRANSPORT", 00:27:40.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.341 "adrfam": "ipv4", 00:27:40.341 "trsvcid": "$NVMF_PORT", 00:27:40.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.341 "hdgst": ${hdgst:-false}, 00:27:40.341 "ddgst": ${ddgst:-false} 00:27:40.341 }, 00:27:40.341 "method": "bdev_nvme_attach_controller" 00:27:40.341 } 00:27:40.341 EOF 00:27:40.341 )") 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.341 { 00:27:40.341 "params": { 00:27:40.341 "name": "Nvme$subsystem", 00:27:40.341 "trtype": "$TEST_TRANSPORT", 00:27:40.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.341 "adrfam": "ipv4", 00:27:40.341 "trsvcid": "$NVMF_PORT", 00:27:40.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.341 "hdgst": ${hdgst:-false}, 00:27:40.341 "ddgst": ${ddgst:-false} 00:27:40.341 }, 00:27:40.341 "method": "bdev_nvme_attach_controller" 00:27:40.341 } 00:27:40.341 EOF 00:27:40.341 )") 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.341 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.341 { 00:27:40.341 "params": { 00:27:40.341 "name": "Nvme$subsystem", 00:27:40.341 "trtype": "$TEST_TRANSPORT", 00:27:40.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.341 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.342 { 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme$subsystem", 00:27:40.342 "trtype": "$TEST_TRANSPORT", 00:27:40.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 
00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.342 { 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme$subsystem", 00:27:40.342 "trtype": "$TEST_TRANSPORT", 00:27:40.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.342 { 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme$subsystem", 00:27:40.342 "trtype": "$TEST_TRANSPORT", 00:27:40.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.342 { 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme$subsystem", 00:27:40.342 "trtype": "$TEST_TRANSPORT", 00:27:40.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.342 { 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme$subsystem", 00:27:40.342 "trtype": "$TEST_TRANSPORT", 00:27:40.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 00:27:40.342 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.342 { 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme$subsystem", 00:27:40.342 "trtype": "$TEST_TRANSPORT", 00:27:40.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:40.342 { 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme$subsystem", 00:27:40.342 "trtype": "$TEST_TRANSPORT", 00:27:40.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "$NVMF_PORT", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.342 "hdgst": ${hdgst:-false}, 00:27:40.342 "ddgst": ${ddgst:-false} 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 } 00:27:40.342 EOF 00:27:40.342 )") 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:40.342 16:26:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme1", 00:27:40.342 "trtype": "tcp", 00:27:40.342 "traddr": "10.0.0.2", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "4420", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.342 "hdgst": false, 00:27:40.342 "ddgst": false 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 },{ 00:27:40.342 "params": { 00:27:40.342 "name": "Nvme2", 00:27:40.342 "trtype": "tcp", 00:27:40.342 "traddr": "10.0.0.2", 00:27:40.342 "adrfam": "ipv4", 00:27:40.342 "trsvcid": "4420", 00:27:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:40.342 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:40.342 "hdgst": false, 00:27:40.342 "ddgst": false 00:27:40.342 }, 00:27:40.342 "method": "bdev_nvme_attach_controller" 00:27:40.342 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme3", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:40.343 "hdgst": false, 00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme4", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:40.343 "hdgst": false, 00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme5", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:40.343 "hdgst": false, 00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme6", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:40.343 "hdgst": false, 00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme7", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:40.343 "hdgst": false, 00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme8", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:40.343 "hdgst": false, 
00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme9", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:40.343 "hdgst": false, 00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 },{ 00:27:40.343 "params": { 00:27:40.343 "name": "Nvme10", 00:27:40.343 "trtype": "tcp", 00:27:40.343 "traddr": "10.0.0.2", 00:27:40.343 "adrfam": "ipv4", 00:27:40.343 "trsvcid": "4420", 00:27:40.343 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:40.343 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:40.343 "hdgst": false, 00:27:40.343 "ddgst": false 00:27:40.343 }, 00:27:40.343 "method": "bdev_nvme_attach_controller" 00:27:40.343 }' 00:27:40.343 [2024-07-15 16:26:23.184194] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:40.343 [2024-07-15 16:26:23.184285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid405828 ] 00:27:40.343 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.343 [2024-07-15 16:26:23.247892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.602 [2024-07-15 16:26:23.337593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.502 Running I/O for 10 seconds... 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # 
jq -r '.bdevs[0].num_read_ops' 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:42.502 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:42.759 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:42.759 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:42.759 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:42.759 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:42.759 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.759 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.760 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.760 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:42.760 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:42.760 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 405828 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 405828 ']' 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 405828 00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:43.017 16:26:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:43.017 16:26:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 405828
00:27:43.275 16:26:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:27:43.275 16:26:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:27:43.275 16:26:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 405828'
00:27:43.275 killing process with pid 405828
00:27:43.275 16:26:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 405828
00:27:43.275 16:26:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 405828
00:27:43.275 Received shutdown signal, test time was about 0.956398 seconds
00:27:43.275
00:27:43.275                                                         Latency(us)
00:27:43.275 Device Information : runtime(s)      IOPS    MiB/s   Fail/s   TO/s    Average        min        max
00:27:43.275 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme1n1   :   0.95    268.56    16.78     0.00   0.00   234880.76   19029.71   260978.92
00:27:43.275 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme2n1   :   0.96    267.90    16.74     0.00   0.00   230949.93   19515.16   256318.58
00:27:43.275 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme3n1   :   0.91    217.75    13.61     0.00   0.00   275823.44    5024.43   257872.02
00:27:43.275 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme4n1   :   0.95    273.95    17.12     0.00   0.00   216731.94    3762.25   260978.92
00:27:43.275 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme5n1   :   0.92    208.81    13.05     0.00   0.00   278377.05   22233.69   257872.02
00:27:43.275 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme6n1   :   0.93    205.55    12.85     0.00   0.00   277206.16   21554.06   267192.70
00:27:43.275 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme7n1   :   0.93    207.50    12.97     0.00   0.00   268005.83   21068.61   257872.02
00:27:43.275 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme8n1   :   0.91    210.45    13.15     0.00   0.00   258102.11   19903.53   264085.81
00:27:43.275 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme9n1   :   0.94    204.43    12.78     0.00   0.00   261034.03   21359.88   274959.93
00:27:43.275 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:43.275 Verification LBA range: start 0x0 length 0x400
00:27:43.275 Nvme10n1  :   0.94    203.38    12.71     0.00   0.00   256728.94   20388.98   288940.94
00:27:43.275 ===================================================================================================================
00:27:43.275 Total     :          2268.28   141.77     0.00   0.00   253209.80    3762.25   288940.94
00:27:43.533 16:26:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 405772
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:44.469 rmmod nvme_tcp
00:27:44.469 rmmod nvme_fabrics
00:27:44.469 rmmod nvme_keyring
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 405772 ']'
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 405772
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 405772 ']'
00:27:44.469 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 405772
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 405772
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 405772'
00:27:44.470 killing process with pid 405772
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 405772
00:27:44.470 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 405772
00:27:45.038 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:45.039
16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:45.039 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:45.039 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:45.039 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:45.039 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.039 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.039 16:26:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.976 16:26:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:46.976 00:27:46.976 real 0m7.879s 00:27:46.976 user 0m24.358s 00:27:46.976 sys 0m1.496s 00:27:46.976 16:26:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:46.976 16:26:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.976 ************************************ 00:27:46.976 END TEST nvmf_shutdown_tc2 00:27:46.976 ************************************ 00:27:47.235 16:26:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:47.235 16:26:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:47.235 16:26:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:47.235 16:26:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.235 ************************************ 00:27:47.235 START TEST nvmf_shutdown_tc3 00:27:47.235 ************************************ 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 
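(Stepping back briefly: the read_io_count values polled during the tc2 run above - 3, then 67, then 131 against the -ge 100 threshold - come from the waitforio helper traced at shutdown.sh@50-69. A hedged reconstruction from that xtrace, not the verbatim helper:)

    # waitforio as reconstructed from the trace: poll the named bdev's read
    # counter over bdevperf's RPC socket until 100 reads complete, giving up
    # after ten 0.25-second polls.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        if [ -z "$rpc_sock" ] || [ -z "$bdev" ]; then
            return 1
        fi
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }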
00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.235 16:26:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:47.235 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:47.235 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:47.235 Found net devices under 0000:84:00.0: cvl_0_0 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.235 16:26:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:47.235 Found net devices under 0000:84:00.1: cvl_0_1 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.235 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.236 16:26:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:47.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:47.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms
00:27:47.236
00:27:47.236 --- 10.0.0.2 ping statistics ---
00:27:47.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:47.236 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:47.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:47.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms
00:27:47.236
00:27:47.236 --- 10.0.0.1 ping statistics ---
00:27:47.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:47.236 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:47.237 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=406854
00:27:47.237 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:27:47.237 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 406854
00:27:47.237 16:26:30
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.236 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.493 [2024-07-15 16:26:30.221065] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:47.493 [2024-07-15 16:26:30.221157] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.493 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.493 [2024-07-15 16:26:30.288635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.493 [2024-07-15 16:26:30.375181] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.493 [2024-07-15 16:26:30.375231] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.493 [2024-07-15 16:26:30.375256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.493 [2024-07-15 16:26:30.375282] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.493 [2024-07-15 16:26:30.375291] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
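(It is worth condensing the namespace plumbing traced above at nvmf/common.sh@229-268: the target-side port of the NIC pair is moved into its own network namespace, both ends get /24 addresses, and a ping in each direction proves the path before nvmf_tgt is launched inside that namespace. A sketch using the same device and address names as the trace:)

    # Condensed from the nvmf_tcp_init trace above; cvl_0_0/cvl_0_1 are the two
    # ports of the E810 pair enumerated earlier in this init.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator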
00:27:47.493 [2024-07-15 16:26:30.375378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.493 [2024-07-15 16:26:30.375442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.493 [2024-07-15 16:26:30.375509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:47.493 [2024-07-15 16:26:30.375511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.750 [2024-07-15 16:26:30.511353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.750 16:26:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:47.750 Malloc1 00:27:47.750 [2024-07-15 16:26:30.586058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.750 Malloc2 00:27:47.751 Malloc3 00:27:47.751 Malloc4 00:27:48.008 Malloc5 00:27:48.008 Malloc6 00:27:48.008 Malloc7 00:27:48.008 Malloc8 00:27:48.008 Malloc9 00:27:48.268 Malloc10 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=406925 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 406925 /var/tmp/bdevperf.sock 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 406925 ']' 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:48.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:48.268 {
00:27:48.268 "params": {
00:27:48.268 "name": "Nvme$subsystem",
00:27:48.268 "trtype": "$TEST_TRANSPORT",
00:27:48.268 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:48.268 "adrfam": "ipv4",
00:27:48.268 "trsvcid": "$NVMF_PORT",
00:27:48.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:48.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:48.268 "hdgst": ${hdgst:-false},
00:27:48.268 "ddgst": ${ddgst:-false}
00:27:48.268 },
00:27:48.268 "method": "bdev_nvme_attach_controller"
00:27:48.268 }
00:27:48.268 EOF
00:27:48.268 )")
00:27:48.268 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:27:48.269 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:27:48.269 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:48.269 16:26:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme1", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme2", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme3", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme4", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme5", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme6", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme7", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme8", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:48.269 "hdgst": false, 
00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme9", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 },{ 00:27:48.269 "params": { 00:27:48.269 "name": "Nvme10", 00:27:48.269 "trtype": "tcp", 00:27:48.269 "traddr": "10.0.0.2", 00:27:48.269 "adrfam": "ipv4", 00:27:48.269 "trsvcid": "4420", 00:27:48.269 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:48.269 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:48.269 "hdgst": false, 00:27:48.269 "ddgst": false 00:27:48.269 }, 00:27:48.269 "method": "bdev_nvme_attach_controller" 00:27:48.269 }' 00:27:48.269 [2024-07-15 16:26:31.080397] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:48.269 [2024-07-15 16:26:31.080482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406925 ] 00:27:48.269 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.269 [2024-07-15 16:26:31.144944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.269 [2024-07-15 16:26:31.233315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.175 Running I/O for 10 seconds... 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.175 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.433 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:50.433 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:50.433 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:50.708 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:50.708 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 406854 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 406854 ']' 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 406854 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 406854 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 406854' 00:27:50.709 killing process with pid 406854 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 406854 00:27:50.709 16:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 406854 00:27:50.709 [2024-07-15 16:26:33.497744] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa650d0 is same with the state(5) to be set
00:27:50.709 [2024-07-15 16:26:33.507077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67ab0 is same with the state(5) to be set
00:27:50.710 [2024-07-15 16:26:33.509235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa65570 is same with the state(5) to be set
00:27:50.710 [2024-07-15 16:26:33.511522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa65a10 is same with the state(5) to be set
00:27:50.711 [2024-07-15 16:26:33.512682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa65ed0 is same with the state(5) to be set
00:27:50.711 [2024-07-15 16:26:33.513425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66370 is same with the state(5) to be set
00:27:50.711 [2024-07-15 16:26:33.515541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66810 is same with the state(5) to be set
00:27:50.712 [2024-07-15 16:26:33.517854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set
same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.517961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.517973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.517986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.517998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.712 [2024-07-15 16:26:33.518540] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.518684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66cb0 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 
00:27:50.713 [2024-07-15 16:26:33.519939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.519993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is 
same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.713 [2024-07-15 16:26:33.520609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.520621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.520633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67170 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521562] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 
00:27:50.714 [2024-07-15 16:26:33.521886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521982] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.521994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is 
same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.522269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67610 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.528290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.714 [2024-07-15 16:26:33.528342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.714 [2024-07-15 16:26:33.528360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.714 [2024-07-15 16:26:33.528375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.714 [2024-07-15 16:26:33.528390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.714 [2024-07-15 16:26:33.528405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.714 [2024-07-15 16:26:33.528420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.714 [2024-07-15 16:26:33.528433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.714 [2024-07-15 16:26:33.528447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf00 is same with the state(5) to be set 00:27:50.714 [2024-07-15 16:26:33.528511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.714 [2024-07-15 16:26:33.528531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.714 [2024-07-15 16:26:33.528546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.714 [2024-07-15 16:26:33.528560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
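The message condensed above comes from the target-side recv-state setter in tcp.c. A minimal sketch of the guard that produces it, assuming an illustrative struct and enum (this is only the shape of the check, not the actual SPDK source in lib/nvmf/tcp.c):

#include <stdio.h>

/* Illustrative stand-ins for the real SPDK types. */
enum recv_state { RECV_STATE_ERROR = 5 }; /* "state(5)" in the log */
struct tqpair { enum recv_state recv_state; };

static void set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
	/* Setting the state a qpair already holds is reported and skipped,
	 * which is exactly the repeated *ERROR* line in this log. */
	if (tqpair->recv_state == state) {
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tqpair q = { RECV_STATE_ERROR };
	set_recv_state(&q, RECV_STATE_ERROR); /* prints the error once */
	return 0;
}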
00:27:50.714 [2024-07-15 16:26:33.528290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:50.714 [2024-07-15 16:26:33.528342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the pair of lines above repeats for the four outstanding admin commands cid:0 through cid:3 on each controller; after the four aborts, nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state logs *ERROR*: "The recv state of tqpair=... is same with the state(5) to be set" once per queue pair, for tqpair=0xd1cf00, 0xcacbb0, 0xcfab40, 0xcd0420, 0xcf8dd0, 0xcf4940, 0xe7fad0, 0x7f9970, 0xcda280 and 0xe77390)
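Each aborted completion carries the status "(00/08)": status code type 0x00 (generic command status) with status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion" - the submission queue was torn down while the command was still outstanding. A tiny illustrative decoder (an assumption added for clarity, not SPDK's API):

#include <stdio.h>

/* Decode the "(sct/sc)" pair printed by the completion lines above.
 * Only the one case seen in this log is handled. */
static const char *decode_status(unsigned sct, unsigned sc)
{
	if (sct == 0x00 && sc == 0x08) {
		return "ABORTED - SQ DELETION";
	}
	return "unknown status";
}

int main(void)
{
	printf("(%02x/%02x) -> %s\n", 0x00, 0x08, decode_status(0x00, 0x08));
	return 0;
}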
00:27:50.715 [2024-07-15 16:26:33.530280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.715 [2024-07-15 16:26:33.530303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the pair of lines above repeats for the queued I/O commands on sqid:1 through at least cid:32; each WRITE covers 128 blocks and starts 128 blocks after the previous one, from lba:24576 at cid:0 up to lba:28672 at cid:32, and the abort sequence continues below)
00:27:50.716 [2024-07-15 16:26:33.531361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 
[2024-07-15 16:26:33.531662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.531955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.531971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 
16:26:33.531985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.716 [2024-07-15 16:26:33.532001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.716 [2024-07-15 16:26:33.532015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 
16:26:33.532309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532451] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe72f40 was disconnected and freed. reset controller. 00:27:50.717 [2024-07-15 16:26:33.532510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.532979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.532996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.717 [2024-07-15 16:26:33.533316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.717 [2024-07-15 16:26:33.533330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533717] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.533974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.533989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.534538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.534620] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc9a010 was disconnected and freed. reset controller. 
00:27:50.718 [2024-07-15 16:26:33.535076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.535101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.535123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.535139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.535155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.535170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.535186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.718 [2024-07-15 16:26:33.535200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.718 [2024-07-15 16:26:33.535216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 
16:26:33.535403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.535983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.535999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.719 [2024-07-15 16:26:33.536353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.719 [2024-07-15 16:26:33.536369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.719-00:27:50.720 [2024-07-15 16:26:33.536382-16:26:33.537115] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:40-63 nsid:1 lba:21504-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (24 identical command/completion pairs condensed)
00:27:50.720 [2024-07-15 16:26:33.537722] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde28c0 was disconnected and freed. reset controller.
00:27:50.720 [2024-07-15 16:26:33.541710] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:50.720 [2024-07-15 16:26:33.541760] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:50.720 [2024-07-15 16:26:33.541803-16:26:33.542064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpair=0xe7fad0, 0xcda280, 0xd1cf00, 0xcacbb0, 0xcfab40, 0xcd0420, 0xcf8dd0, 0xcf4940, 0x7f9970, 0xe77390 (10 records condensed)
00:27:50.720-00:27:50.722 [2024-07-15 16:26:33.542779-16:26:33.544830] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:20-63 nsid:1 lba:18944-24448 and WRITE sqid:1 cid:0-19 nsid:1 lba:24576-27008, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 identical command/completion pairs condensed)
00:27:50.722 [2024-07-15 16:26:33.544849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde92b0 is same with the state(5) to be set
00:27:50.722 [2024-07-15 16:26:33.544928] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde92b0 was disconnected and freed. reset controller.
00:27:50.722 [2024-07-15 16:26:33.545831-16:26:33.546134] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (5 records condensed)
00:27:50.722 [2024-07-15 16:26:33.546427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:50.722 [2024-07-15 16:26:33.546634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.722 [2024-07-15 16:26:33.546663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcda280 with addr=10.0.0.2, port=4420
00:27:50.722 [2024-07-15 16:26:33.546681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcda280 is same with the state(5) to be set
00:27:50.722 [2024-07-15 16:26:33.546830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.722 [2024-07-15 16:26:33.546857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe7fad0 with addr=10.0.0.2, port=4420
00:27:50.722 [2024-07-15 16:26:33.546874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7fad0 is same with the state(5) to be set
00:27:50.722 [2024-07-15 16:26:33.548079] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:50.722 [2024-07-15 16:26:33.548230] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:50.722 [2024-07-15 16:26:33.548418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.722 [2024-07-15 16:26:33.548446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1cf00 with addr=10.0.0.2, port=4420
00:27:50.722 [2024-07-15 16:26:33.548463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf00 is same with the state(5) to be set
00:27:50.722 [2024-07-15 16:26:33.548486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcda280 (9): Bad file descriptor
00:27:50.722 [2024-07-15 16:26:33.548507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7fad0 (9): Bad file descriptor
00:27:50.722 [2024-07-15 16:26:33.548723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.722 [2024-07-15 16:26:33.548757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcacbb0 with addr=10.0.0.2, port=4420
00:27:50.722 [2024-07-15 16:26:33.548775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcacbb0 is same with the state(5) to be set
00:27:50.722 [2024-07-15 16:26:33.548794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1cf00 (9): Bad file descriptor
00:27:50.722 [2024-07-15 16:26:33.548813] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:50.722 [2024-07-15 16:26:33.548827] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:50.722 [2024-07-15 16:26:33.548844] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:50.722 [2024-07-15 16:26:33.548865] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:50.722 [2024-07-15 16:26:33.548881] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:50.722 [2024-07-15 16:26:33.548895] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:50.722 [2024-07-15 16:26:33.549215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:50.722 [2024-07-15 16:26:33.549238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:50.722 [2024-07-15 16:26:33.549255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcacbb0 (9): Bad file descriptor
00:27:50.722 [2024-07-15 16:26:33.549272] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:50.722 [2024-07-15 16:26:33.549286] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:50.722 [2024-07-15 16:26:33.549299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:50.722 [2024-07-15 16:26:33.549361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:50.722 [2024-07-15 16:26:33.549380] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:50.722 [2024-07-15 16:26:33.549394] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:50.722 [2024-07-15 16:26:33.549408] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:50.722 [2024-07-15 16:26:33.549470] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:50.722-00:27:50.723 [2024-07-15 16:26:33.551910-16:26:33.553309] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:20-63 nsid:1 lba:18944-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (44 identical command/completion pairs condensed)
00:27:50.723 [2024-07-15 16:26:33.553324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe66be0 is same with the state(5) to be set
00:27:50.723-00:27:50.724 [2024-07-15 16:26:33.554493-16:26:33.556204] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-53 nsid:1 lba:16384-23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (54 identical command/completion pairs condensed)
00:27:50.724 [2024-07-15 16:26:33.556220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.724 [2024-07-15 16:26:33.556234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-07-15 16:26:33.556250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.724 [2024-07-15 16:26:33.556264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-07-15 16:26:33.556280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.724 [2024-07-15 16:26:33.556294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-07-15 16:26:33.556310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.724 [2024-07-15 16:26:33.556324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-07-15 16:26:33.556341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.724 [2024-07-15 16:26:33.556354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.724 [2024-07-15 16:26:33.556370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.556384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.556400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.556416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.556432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.556447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.556463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.556477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.556493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.556507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.556525] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca63e0 is same with the state(5) to be set 00:27:50.725 [2024-07-15 16:26:33.557785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.557818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.557839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.557854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.557870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.557886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.557902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.557917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.557933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.557947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.557963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.557977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.557993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558100] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.725 [2024-07-15 16:26:33.558733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.725 [2024-07-15 16:26:33.558756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.558772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.558797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.558813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.558827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.558843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.558857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.558873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.558888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.558904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.558918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.558934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.558949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.558970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.558985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:50.726 [2024-07-15 16:26:33.559365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 
16:26:33.559664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.559790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.559813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca78e0 is same with the state(5) to be set 00:27:50.726 [2024-07-15 16:26:33.561115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.726 [2024-07-15 16:26:33.561396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.726 [2024-07-15 16:26:33.561412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.561972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.561986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.727 [2024-07-15 16:26:33.562718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.727 [2024-07-15 16:26:33.562733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.562979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.562999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.563014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.563039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.563054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.563070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.563084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.563103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.563117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.563133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8e00 is same with the state(5) to be set 00:27:50.728 [2024-07-15 16:26:33.564399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.728 [2024-07-15 16:26:33.564422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.728 [2024-07-15 16:26:33.564444] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.728 [2024-07-15 16:26:33.564460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.728 [2024-07-15 16:26:33.564477 - 16:26:33.566430] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same command/completion pair repeats for READ sqid:1 cid:6-59 nsid:1 (lba 17152-23936, advancing by 128 per command), WRITE sqid:1 cid:0-3 nsid:1 (lba 24576-24960), and READ sqid:1 cid:60-63 nsid:1 (lba 24064-24448), all len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every command completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.730 [2024-07-15 16:26:33.566444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17789f0 is same with the state(5) to be set
00:27:50.730 [2024-07-15 16:26:33.567708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.730 [2024-07-15 16:26:33.567731 - 16:26:33.576425] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a second, identical dump for READ sqid:1 cid:0-63 nsid:1 (lba 16384-24448, advancing by 128 per command, len:128 each); every command again completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.731 [2024-07-15 16:26:33.576441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde1390 is same with the state(5) to be set
00:27:50.731 [2024-07-15 16:26:33.578108] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:50.731 [2024-07-15 16:26:33.578145] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:50.731 [2024-07-15 16:26:33.578166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:50.731 [2024-07-15 16:26:33.578184] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:50.731 [2024-07-15 16:26:33.578303] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:50.731 [2024-07-15 16:26:33.578330] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
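The "(00/08)" printed with every aborted completion above is the NVMe status code type / status code pair: SCT 00h (generic command status) with SC 08h, Command Aborted due to SQ Deletion, which is what outstanding reads and writes report when their submission queue is torn down during a controller reset. A hypothetical decoder for that status word is sketched below; the field offsets follow the NVMe base specification, but the function name and constants are illustrative, not SPDK's actual code.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical decoder for the NVMe completion status word (CQE dword 3,
     * bits 31:16, with the phase tag in bit 0 of this halfword). */
    static void print_status(uint16_t status)
    {
        unsigned sc  = (status >> 1)  & 0xff; /* status code      */
        unsigned sct = (status >> 9)  & 0x7;  /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more             */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry     */

        if (sct == 0x0 && sc == 0x08) /* generic / aborted due to SQ deletion */
            printf("ABORTED - SQ DELETION (%02x/%02x) m:%u dnr:%u\n",
                   sct, sc, m, dnr);
    }

    int main(void)
    {
        print_status(0x08 << 1); /* the status reported throughout the dump above */
        return 0;
    }

Note that dnr:0 in each line means the "do not retry" bit is clear, so the host is permitted to resubmit these commands once the controller reconnects, which is why the resets logged above follow immediately.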
00:27:50.731 [2024-07-15 16:26:33.578431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:50.731 task offset: 24576 on job bdev=Nvme3n1 fails
00:27:50.731
00:27:50.731                                                Latency(us)
00:27:50.731 Device Information          : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average       min       max
00:27:50.731 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme1n1 ended in about 0.89 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme1n1  : 0.89    167.04    10.44    72.23    0.00    264380.06    19903.53    262532.36
00:27:50.731 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme2n1 ended in about 0.89 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme2n1  : 0.89    165.85    10.37    49.31    0.00    287078.27    22524.97    257872.02
00:27:50.731 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme3n1 ended in about 0.88 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme3n1  : 0.88    218.90    13.68    72.97    0.00    207487.43    25243.50    248551.35
00:27:50.731 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme4n1 ended in about 0.88 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme4n1  : 0.88    218.63    13.66    72.88    0.00    203159.32    11650.84    254765.13
00:27:50.731 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme5n1 ended in about 0.90 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme5n1  : 0.90    142.91     8.93    71.46    0.00    270667.73    22039.51    265639.25
00:27:50.731 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme6n1 ended in about 0.90 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme6n1  : 0.90    142.39     8.90    71.20    0.00    265820.29    20874.43    262532.36
00:27:50.731 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme7n1 ended in about 0.90 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme7n1  : 0.90    141.87     8.87    70.94    0.00    260965.01    19903.53    256318.58
00:27:50.731 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme8n1 ended in about 0.91 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme8n1  : 0.91    145.77     9.11    70.68    0.00    250975.14    19029.71    242337.56
00:27:50.731 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme9n1 ended in about 0.92 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme9n1  : 0.92    139.81     8.74    69.90    0.00    253623.56    21456.97    276513.37
00:27:50.731 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:50.731 Job: Nvme10n1 ended in about 0.88 seconds with error
00:27:50.731 Verification LBA range: start 0x0 length 0x400
00:27:50.731 Nvme10n1 : 0.88    145.50     9.09    72.75    0.00    235700.27    12621.75    290494.39
00:27:50.731 ===================================================================================================================
00:27:50.731 Total    :        1628.66   101.79   694.30    0.00    247367.50    11650.84    290494.39
00:27:50.731 [2024-07-15 16:26:33.606706] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
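The throughput columns in the summary above follow from simple rate arithmetic: with 64 KiB IOs (IO size: 65536), MiB/s is IOPS times the IO size. A quick cross-check of the Nvme1n1 row in C; the values are copied from the table, and the formula is the generic definition rather than bdevperf's internal code.

    #include <stdio.h>

    int main(void)
    {
        const double io_size_bytes = 65536.0; /* "IO size: 65536" from the job line */
        const double iops          = 167.04;  /* Nvme1n1 row, IOPS column           */

        /* Throughput = IOPS * IO size, reported in MiB/s like the table column. */
        double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);
        printf("%.2f IOPS * 64 KiB = %.2f MiB/s\n", iops, mib_per_s); /* ~10.44 */
        return 0;
    }

167.04 IOPS at 64 KiB per IO gives 10.44 MiB/s, matching the MiB/s column; the Fail/s column similarly counts the aborted IOs per second from the SQ-deletion dump above.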
00:27:50.731 [2024-07-15 16:26:33.606799] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:50.731 [2024-07-15 16:26:33.607152 - 16:26:33.607775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, and nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, for tqpair=0xe77390, 0x7f9970, 0xcfab40 and 0xcf4940 with addr=10.0.0.2, port=4420; nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: the recv state of each tqpair is same with the state(5) to be set
00:27:50.732 [2024-07-15 16:26:33.609413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:50.732 [2024-07-15 16:26:33.609442] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:50.732 [2024-07-15 16:26:33.609460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:50.732 [2024-07-15 16:26:33.609477] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:50.732 [2024-07-15 16:26:33.609708 - 16:26:33.609994] posix.c:1037 / nvme_tcp.c:2374 / nvme_tcp.c: 323: *ERROR*: the same connect() failed, errno = 111 / sock connection error / recv state sequence for tqpair=0xcd0420 and 0xcf8dd0 with addr=10.0.0.2, port=4420
00:27:50.732 [2024-07-15 16:26:33.610021 - 16:26:33.610086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe77390, 0x7f9970, 0xcfab40 and 0xcf4940 (9): Bad file descriptor
00:27:50.732 [2024-07-15 16:26:33.610144 - 16:26:33.610212] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (x4)
00:27:50.732 [2024-07-15 16:26:33.610885 - 16:26:33.611482] posix.c:1037 / nvme_tcp.c:2374 / nvme_tcp.c: 323: *ERROR*: the same connect() failed, errno = 111 / sock connection error / recv state sequence for tqpair=0xe7fad0, 0xcda280, 0xd1cf00 and 0xcacbb0 with addr=10.0.0.2, port=4420
00:27:50.732 [2024-07-15 16:26:33.611500 - 16:26:33.611519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd0420 and 0xcf8dd0 (9): Bad file descriptor
00:27:50.732 [2024-07-15 16:26:33.611537 - 16:26:33.611717] nvme_ctrlr.c:4042:nvme_ctrlr_process_init / 1751:spdk_nvme_ctrlr_reconnect_poll_async / 1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2], [cnode5], [cnode6] and [cnode7]: Ctrlr is in error state, controller reinitialization failed, in failed state.
00:27:50.732 [2024-07-15 16:26:33.611825 - 16:26:33.611872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (x4)
00:27:50.732 [2024-07-15 16:26:33.611888 - 16:26:33.611942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7fad0, 0xcda280, 0xd1cf00 and 0xcacbb0 (9): Bad file descriptor
00:27:50.732 [2024-07-15 16:26:33.611959 - 16:26:33.612037] nvme_ctrlr.c:4042 / 1751 / 1043: *ERROR*: [nqn.2016-06.io.spdk:cnode8] and [cnode9]: Ctrlr is in error state, controller reinitialization failed, in failed state.
00:27:50.732 [2024-07-15 16:26:33.612075 - 16:26:33.612093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (x2)
00:27:50.732 [2024-07-15 16:26:33.612106 - 16:26:33.612829] nvme_ctrlr.c:4042 / 1751 / 1043: *ERROR*: [nqn.2016-06.io.spdk:cnode4], [cnode3], [cnode10] and [cnode1]: Ctrlr is in error state, controller reinitialization failed, in failed state.
00:27:50.732 [2024-07-15 16:26:33.612871 - 16:26:33.612913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (x4)
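The errno = 111 in the connect() failures above is ECONNREFUSED on Linux: once the target application has stopped, nothing listens on 10.0.0.2:4420, so every reconnect attempt is refused and the stale qpairs are then flushed with errno 9 (EBADF), the "(9): Bad file descriptor" in the flush errors. A minimal sketch of the failing call using plain POSIX sockets, not SPDK's posix_sock_create:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        /* Address and port taken from the log lines above. */
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port   = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* With no listener on the port, this fails with errno 111. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        close(fd);
        return 0;
    }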
00:27:51.297 16:26:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:27:51.297 16:26:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 406925
00:27:52.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (406925) - No such process
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:52.231 rmmod nvme_tcp
00:27:52.231 rmmod nvme_fabrics
00:27:52.231 rmmod nvme_keyring
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:52.231 16:26:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:54.764 16:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:54.764
00:27:54.764 real 0m7.153s
00:27:54.764 user 0m16.931s
00:27:54.764 sys 0m1.383s
00:27:54.764 16:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:54.764 16:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:54.764 ************************************
00:27:54.764 END TEST nvmf_shutdown_tc3
00:27:54.764 ************************************
00:27:54.764 16:26:37 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:27:54.764
00:27:54.764 real 0m27.925s
00:27:54.764 user 1m20.045s
00:27:54.764 sys 0m6.194s
00:27:54.764 16:26:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable
00:27:54.764 16:26:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:54.764 ************************************
00:27:54.764 END TEST nvmf_shutdown
00:27:54.764 ************************************
00:27:54.764 16:26:37 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:27:54.764 16:26:37 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:54.764 16:26:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:54.764 16:26:37 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:27:54.764 16:26:37 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:54.764 16:26:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:54.764 16:26:37 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:27:54.764 16:26:37 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:54.764 16:26:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:27:54.764 16:26:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:27:54.764 16:26:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:54.764 ************************************
00:27:54.764 START TEST nvmf_multicontroller
00:27:54.764 ************************************
00:27:54.764 16:26:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:54.764 * Looking for test storage...
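Condensed, the stoptarget/nvmftestfini teardown traced above reduces to the following steps; the paths and the cvl_0_0_ns_spdk namespace name come from this log, while treating _remove_spdk_ns as a plain namespace deletion is an assumption:

  rm -f ./local-job0-0-verify.state                      # per-job bdevperf state file
  rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" \
         "$SPDK_DIR/test/nvmf/target/rpcs.txt"           # $SPDK_DIR stands in for the checkout path
  modprobe -v -r nvme-tcp                                # unloads nvme_tcp plus its fabrics/keyring dependents
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1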
00:27:54.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.764 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.764 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:54.764 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.764 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.764 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:54.765 16:26:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:54.765 16:26:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:56.673 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.674 16:26:39 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:56.674 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:56.674 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:56.674 Found net devices under 0000:84:00.0: cvl_0_0 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:56.674 Found net devices under 0000:84:00.1: cvl_0_1 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.674 16:26:39 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:56.674 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:56.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:56.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
00:27:56.675
00:27:56.675 --- 10.0.0.2 ping statistics ---
00:27:56.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:56.675 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:56.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:56.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:27:56.675
00:27:56.675 --- 10.0.0.1 ping statistics ---
00:27:56.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:56.675 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=409444
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 409444
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 409444 ']'
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller --
common/autotest_common.sh@832 -- # local max_retries=100 00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:56.675 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.675 [2024-07-15 16:26:39.470187] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:56.675 [2024-07-15 16:26:39.470260] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.675 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.675 [2024-07-15 16:26:39.537998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.675 [2024-07-15 16:26:39.628579] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.675 [2024-07-15 16:26:39.628642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.675 [2024-07-15 16:26:39.628658] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.675 [2024-07-15 16:26:39.628679] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.675 [2024-07-15 16:26:39.628690] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.675 [2024-07-15 16:26:39.628768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.675 [2024-07-15 16:26:39.628884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.675 [2024-07-15 16:26:39.628887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 [2024-07-15 16:26:39.777408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 Malloc0 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 [2024-07-15 16:26:39.840771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 [2024-07-15 16:26:39.848650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 Malloc1 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
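At this point the target side of the multicontroller test is fully provisioned: a TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems (cnode1 and cnode2) that each listen on 10.0.0.2:4420 and 10.0.0.2:4421. Condensed from the rpc_cmd trace above (rpc_cmd being the test suite's wrapper around scripts/rpc.py):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ...followed by the same bdev/subsystem/ns/listener steps for Malloc1 and cnode2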
00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=409473 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 409473 /var/tmp/bdevperf.sock 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 409473 ']' 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:56.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
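The bdevperf process just launched drives all I/O for this test: -q 128 sets the queue depth, -o 4096 the I/O size in bytes, -w write the workload, and -t 1 the run time in seconds, while -r points it at the /var/tmp/bdevperf.sock RPC socket (-z and -f are passed by the harness so it idles until configured over RPC). With 4096-byte I/Os, throughput in MiB/s follows directly from IOPS; for the 19635.21 write IOPS reported in the summary further down:

  echo 'scale=2; 19635.21 * 4096 / 1048576' | bc    # -> 76.70 MiB/s, matching the summary table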
00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:56.935 16:26:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.501 NVMe0n1
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:57.501 1
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.501 request:
00:27:57.501 {
00:27:57.501 "name": "NVMe0",
00:27:57.501 "trtype": "tcp",
00:27:57.501 "traddr": "10.0.0.2",
00:27:57.501 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:27:57.501 "hostaddr": "10.0.0.2",
00:27:57.501 "hostsvcid": "60000",
00:27:57.501 "adrfam": "ipv4",
00:27:57.501 "trsvcid": "4420",
00:27:57.501 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:57.501 "method": "bdev_nvme_attach_controller",
00:27:57.501 "req_id": 1
00:27:57.501 }
00:27:57.501 Got JSON-RPC error response
00:27:57.501 response:
00:27:57.501 {
00:27:57.501 "code": -114,
00:27:57.501 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:27:57.501 }
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:27:57.501 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.502 request:
00:27:57.502 {
00:27:57.502 "name": "NVMe0",
00:27:57.502 "trtype": "tcp",
00:27:57.502 "traddr": "10.0.0.2",
00:27:57.502 "hostaddr": "10.0.0.2",
00:27:57.502 "hostsvcid": "60000",
00:27:57.502 "adrfam": "ipv4",
00:27:57.502 "trsvcid": "4420",
00:27:57.502 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:27:57.502 "method": "bdev_nvme_attach_controller",
00:27:57.502 "req_id": 1
00:27:57.502 }
00:27:57.502 Got JSON-RPC error response
00:27:57.502 response:
00:27:57.502 {
00:27:57.502 "code": -114,
00:27:57.502 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:27:57.502 }
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.502 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.760 request:
00:27:57.760 {
00:27:57.760 "name": "NVMe0",
00:27:57.760 "trtype": "tcp",
00:27:57.760 "traddr": "10.0.0.2",
00:27:57.760 "hostaddr": "10.0.0.2",
00:27:57.760 "hostsvcid": "60000",
00:27:57.760 "adrfam": "ipv4",
00:27:57.760 "trsvcid": "4420",
00:27:57.761 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:57.761 "multipath": "disable",
00:27:57.761 "method": "bdev_nvme_attach_controller",
00:27:57.761 "req_id": 1
00:27:57.761 }
00:27:57.761 Got JSON-RPC error response
00:27:57.761 response:
00:27:57.761 {
00:27:57.761 "code": -114,
00:27:57.761 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:27:57.761 }
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.761 request:
00:27:57.761 {
00:27:57.761 "name": "NVMe0",
00:27:57.761 "trtype": "tcp",
00:27:57.761 "traddr": "10.0.0.2",
00:27:57.761 "hostaddr": "10.0.0.2",
00:27:57.761 "hostsvcid": "60000",
00:27:57.761 "adrfam": "ipv4",
00:27:57.761 "trsvcid": "4420",
00:27:57.761 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:57.761 "multipath": "failover",
00:27:57.761 "method": "bdev_nvme_attach_controller",
00:27:57.761 "req_id": 1
00:27:57.761 }
00:27:57.761 Got JSON-RPC error response
00:27:57.761 response:
00:27:57.761 {
00:27:57.761 "code": -114,
00:27:57.761 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:27:57.761 }
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.761
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:57.761
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:57.761 16:26:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:59.138 0 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 409473 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 409473 ']' 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 409473 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 409473 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 409473' 00:27:59.138 killing process with pid 409473 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 409473 00:27:59.138 16:26:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 409473 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:59.138 16:26:42 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file
00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u
00:27:59.138 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat
00:27:59.138 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:27:59.138 [2024-07-15 16:26:39.951859] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:27:59.138 [2024-07-15 16:26:39.951936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409473 ]
00:27:59.138 EAL: No free 2048 kB hugepages reported on node 1
00:27:59.138 [2024-07-15 16:26:40.012152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:59.138 [2024-07-15 16:26:40.105803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:59.138 [2024-07-15 16:26:40.679547] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 28624224-3241-403e-b529-8e66eeece179 already exists
00:27:59.138 [2024-07-15 16:26:40.679586] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:28624224-3241-403e-b529-8e66eeece179 alias for bdev NVMe1n1
00:27:59.138 [2024-07-15 16:26:40.679603] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:27:59.138 Running I/O for 1 seconds...
00:27:59.138
00:27:59.138 Latency(us)
00:27:59.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.138 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:27:59.138 NVMe0n1 : 1.00 19635.21 76.70 0.00 0.00 6501.06 2148.12 11602.30
00:27:59.138 ===================================================================================================================
00:27:59.138 Total : 19635.21 76.70 0.00 0.00 6501.06 2148.12 11602.30
00:27:59.138 Received shutdown signal, test time was about 1.000000 seconds
00:27:59.139
00:27:59.139 Latency(us)
00:27:59.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.139 ===================================================================================================================
00:27:59.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:59.139 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:59.139 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:59.139 rmmod nvme_tcp
00:27:59.396 rmmod nvme_fabrics
00:27:59.396 rmmod nvme_keyring
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 409444 ']'
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 409444
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 409444 ']'
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 409444
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 409444
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:59.396 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 409444'
00:27:59.397 killing process with pid 409444
00:27:59.397 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 409444
00:27:59.397 16:26:42
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 409444 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.654 16:26:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.560 16:26:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.560 00:28:01.560 real 0m7.209s 00:28:01.560 user 0m11.198s 00:28:01.560 sys 0m2.216s 00:28:01.560 16:26:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:01.560 16:26:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.560 ************************************ 00:28:01.560 END TEST nvmf_multicontroller 00:28:01.560 ************************************ 00:28:01.560 16:26:44 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:01.560 16:26:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:01.560 16:26:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:01.560 16:26:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:01.560 ************************************ 00:28:01.560 START TEST nvmf_aer 00:28:01.560 ************************************ 00:28:01.560 16:26:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:01.820 * Looking for test storage... 
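
The multicontroller teardown traced just above (host/multicontroller.sh@108 nvmftestfini, then nvmfcleanup and killprocess from nvmf/common.sh) reduces to the bash sketch below. The retry loop around the module unloads and the error handling are trimmed, and the internals of _remove_spdk_ns are not shown in this log, so the namespace-deletion comment is an assumption:

    sync
    set +e
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring, per the rmmod output above
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess; $nvmfpid is 409444 in this run
    _remove_spdk_ns                      # assumption: tears down the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1
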
00:28:01.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.820 16:26:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.726 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.726 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:03.726 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:03.726 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:03.727 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 
0x159b)' 00:28:03.727 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:03.727 Found net devices under 0000:84:00.0: cvl_0_0 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:03.727 Found net devices under 0000:84:00.1: cvl_0_1 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.727 
16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:03.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:28:03.727 00:28:03.727 --- 10.0.0.2 ping statistics --- 00:28:03.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.727 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:28:03.727 00:28:03.727 --- 10.0.0.1 ping statistics --- 00:28:03.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.727 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=411698 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 411698 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 411698 ']' 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:03.727 16:26:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:03.987 [2024-07-15 16:26:46.744872] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:03.987 [2024-07-15 16:26:46.744959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.987 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.987 [2024-07-15 16:26:46.814989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:03.987 [2024-07-15 16:26:46.905693] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.987 [2024-07-15 16:26:46.905771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:03.987 [2024-07-15 16:26:46.905786] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.987 [2024-07-15 16:26:46.905797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.987 [2024-07-15 16:26:46.905807] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.987 [2024-07-15 16:26:46.905858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.987 [2024-07-15 16:26:46.905883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.987 [2024-07-15 16:26:46.905939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.987 [2024-07-15 16:26:46.905942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 [2024-07-15 16:26:47.055291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 Malloc0 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 [2024-07-15 16:26:47.106337] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.249 [ 00:28:04.249 { 00:28:04.249 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:04.249 "subtype": "Discovery", 00:28:04.249 "listen_addresses": [], 00:28:04.249 "allow_any_host": true, 00:28:04.249 "hosts": [] 00:28:04.249 }, 00:28:04.249 { 00:28:04.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.249 "subtype": "NVMe", 00:28:04.249 "listen_addresses": [ 00:28:04.249 { 00:28:04.249 "trtype": "TCP", 00:28:04.249 "adrfam": "IPv4", 00:28:04.249 "traddr": "10.0.0.2", 00:28:04.249 "trsvcid": "4420" 00:28:04.249 } 00:28:04.249 ], 00:28:04.249 "allow_any_host": true, 00:28:04.249 "hosts": [], 00:28:04.249 "serial_number": "SPDK00000000000001", 00:28:04.249 "model_number": "SPDK bdev Controller", 00:28:04.249 "max_namespaces": 2, 00:28:04.249 "min_cntlid": 1, 00:28:04.249 "max_cntlid": 65519, 00:28:04.249 "namespaces": [ 00:28:04.249 { 00:28:04.249 "nsid": 1, 00:28:04.249 "bdev_name": "Malloc0", 00:28:04.249 "name": "Malloc0", 00:28:04.249 "nguid": "D99FB32072964464A42CFCD4A76A5755", 00:28:04.249 "uuid": "d99fb320-7296-4464-a42c-fcd4a76a5755" 00:28:04.249 } 00:28:04.249 ] 00:28:04.249 } 00:28:04.249 ] 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=411765 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:04.249 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:04.249 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.509 Malloc1 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.509 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.770 Asynchronous Event Request test 00:28:04.770 Attaching to 10.0.0.2 00:28:04.770 Attached to 10.0.0.2 00:28:04.770 Registering asynchronous event callbacks... 00:28:04.770 Starting namespace attribute notice tests for all controllers... 00:28:04.770 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:04.770 aer_cb - Changed Namespace 00:28:04.770 Cleaning up... 
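
The sequence traced above is the core of the AER test: host/aer.sh launches the example aer tool against nqn.2016-06.io.spdk:cnode1 with -t /tmp/aer_touch_file and -n 2 (reading the invocation at @27, the namespace count the tool should end up seeing; the flag's meaning is not spelled out in this log), polls in waitforfile until the tool creates the touch file, hot-adds Malloc1 as nsid 2 at @39/@40, which fires the "Changed Namespace" AEN printed above, and finally waits on the tool (pid 411765) at @43. The waitforfile helper stepped through at @1261-@1272 amounts to roughly this sketch, reconstructed from the traced checks (200 retries of 0.1 s each):

    waitforfile() {
        # poll until the path in $1 exists; give up after roughly 20 s
        local i=0
        while [ ! -e "$1" ]; do
            if [ "$i" -lt 200 ]; then
                i=$((i + 1))
                sleep 0.1
            else
                return 1
            fi
        done
        return 0
    }
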
00:28:04.770 [ 00:28:04.770 { 00:28:04.770 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:04.770 "subtype": "Discovery", 00:28:04.770 "listen_addresses": [], 00:28:04.770 "allow_any_host": true, 00:28:04.770 "hosts": [] 00:28:04.770 }, 00:28:04.770 { 00:28:04.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.770 "subtype": "NVMe", 00:28:04.770 "listen_addresses": [ 00:28:04.770 { 00:28:04.770 "trtype": "TCP", 00:28:04.770 "adrfam": "IPv4", 00:28:04.770 "traddr": "10.0.0.2", 00:28:04.770 "trsvcid": "4420" 00:28:04.770 } 00:28:04.770 ], 00:28:04.770 "allow_any_host": true, 00:28:04.770 "hosts": [], 00:28:04.770 "serial_number": "SPDK00000000000001", 00:28:04.770 "model_number": "SPDK bdev Controller", 00:28:04.770 "max_namespaces": 2, 00:28:04.770 "min_cntlid": 1, 00:28:04.770 "max_cntlid": 65519, 00:28:04.770 "namespaces": [ 00:28:04.770 { 00:28:04.770 "nsid": 1, 00:28:04.770 "bdev_name": "Malloc0", 00:28:04.770 "name": "Malloc0", 00:28:04.770 "nguid": "D99FB32072964464A42CFCD4A76A5755", 00:28:04.770 "uuid": "d99fb320-7296-4464-a42c-fcd4a76a5755" 00:28:04.770 }, 00:28:04.770 { 00:28:04.770 "nsid": 2, 00:28:04.770 "bdev_name": "Malloc1", 00:28:04.770 "name": "Malloc1", 00:28:04.770 "nguid": "9AE1A9EBB2BF47DA859A8ED7D8970DA5", 00:28:04.770 "uuid": "9ae1a9eb-b2bf-47da-859a-8ed7d8970da5" 00:28:04.770 } 00:28:04.770 ] 00:28:04.770 } 00:28:04.770 ] 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 411765 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.770 rmmod nvme_tcp 00:28:04.770 rmmod nvme_fabrics 00:28:04.770 rmmod nvme_keyring 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 411698 ']' 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 411698 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 411698 ']' 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 411698 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 411698 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 411698' 00:28:04.770 killing process with pid 411698 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 411698 00:28:04.770 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 411698 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.030 16:26:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.562 16:26:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:07.562 00:28:07.562 real 0m5.416s 00:28:07.562 user 0m4.515s 00:28:07.562 sys 0m1.925s 00:28:07.562 16:26:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:07.562 16:26:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:07.562 ************************************ 00:28:07.562 END TEST nvmf_aer 00:28:07.562 ************************************ 00:28:07.562 16:26:49 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:07.562 16:26:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:07.562 16:26:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:07.562 16:26:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:07.562 ************************************ 00:28:07.562 START TEST nvmf_async_init 00:28:07.562 ************************************ 00:28:07.562 16:26:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:07.562 * Looking for test storage... 
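
The run_test wrapper that opens and closes each of these blocks (the START TEST / END TEST banners with the real/user/sys timing in between) behaves roughly like the sketch below. This is a reconstruction from the banners and the '[' 3 -le 1 ']' argument-count check visible in the trace, not the actual autotest_common.sh code, which also does xtrace bookkeeping and exit-status capture:

    run_test() {
        # usage guard, cf. the '[' 3 -le 1 ']' check traced above
        if [ "$#" -le 1 ]; then
            echo "usage: run_test <name> <command...>" >&2
            return 1
        fi
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # source of the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
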
00:28:07.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=830e9e369785469c8150c524f49a649a 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:07.562 16:26:50 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:07.562 16:26:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:09.466 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:09.466 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:09.466 Found net devices under 0000:84:00.0: cvl_0_0 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
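
The per-device scan just traced for 0000:84:00.0 (and repeated next for 0000:84:00.1) maps each matched PCI function to its kernel net device through sysfs. Given the pci_devs array built from the vendor/device ID tables at @296-@320 (both ports here are Intel E810 functions, 0x8086:0x159b, bound to the ice driver), the loop at nvmf/common.sh@382-@401 is essentially:

    for pci in "${pci_devs[@]}"; do
        # a NIC's net device name is published under its PCI node in sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the device name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
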
00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:09.466 Found net devices under 0000:84:00.1: cvl_0_1 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.466 16:26:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.466 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.466 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.466 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.466 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:09.466 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:28:09.467 00:28:09.467 --- 10.0.0.2 ping statistics --- 00:28:09.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.467 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:28:09.467 00:28:09.467 --- 10.0.0.1 ping statistics --- 00:28:09.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.467 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=413796 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 413796 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 413796 ']' 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 [2024-07-15 16:26:52.130354] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
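
The namespace plumbing this target is starting under (nvmf_tgt runs via ip netns exec cvl_0_0_ns_spdk, as traced at @480) gives the target port its own network namespace so the two local E810 ports can exchange real TCP traffic. Condensed from the ip/iptables commands traced above (run as root; the same layout the aer test used earlier):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # reachability check, both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why the target listens on 10.0.0.2 while the host-side initiator connects from 10.0.0.1, as the ping statistics above confirm.
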
00:28:09.467 [2024-07-15 16:26:52.130439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.467 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.467 [2024-07-15 16:26:52.194775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.467 [2024-07-15 16:26:52.277914] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.467 [2024-07-15 16:26:52.277971] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.467 [2024-07-15 16:26:52.277996] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.467 [2024-07-15 16:26:52.278007] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.467 [2024-07-15 16:26:52.278017] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.467 [2024-07-15 16:26:52.278058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 [2024-07-15 16:26:52.406931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 null0 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 830e9e369785469c8150c524f49a649a 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.467 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.726 [2024-07-15 16:26:52.447234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.726 nvme0n1 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.726 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.726 [ 00:28:09.726 { 00:28:09.726 "name": "nvme0n1", 00:28:09.726 "aliases": [ 00:28:09.726 "830e9e36-9785-469c-8150-c524f49a649a" 00:28:09.726 ], 00:28:09.726 "product_name": "NVMe disk", 00:28:09.726 "block_size": 512, 00:28:09.726 "num_blocks": 2097152, 00:28:09.726 "uuid": "830e9e36-9785-469c-8150-c524f49a649a", 00:28:09.726 "assigned_rate_limits": { 00:28:09.726 "rw_ios_per_sec": 0, 00:28:09.726 "rw_mbytes_per_sec": 0, 00:28:09.726 "r_mbytes_per_sec": 0, 00:28:09.726 "w_mbytes_per_sec": 0 00:28:09.726 }, 00:28:09.726 "claimed": false, 00:28:09.726 "zoned": false, 00:28:09.726 "supported_io_types": { 00:28:09.726 "read": true, 00:28:09.726 "write": true, 00:28:09.726 "unmap": false, 00:28:09.726 "write_zeroes": true, 00:28:09.726 "flush": true, 00:28:09.726 "reset": true, 00:28:09.726 "compare": true, 00:28:09.726 "compare_and_write": true, 00:28:09.726 "abort": true, 00:28:09.727 "nvme_admin": true, 00:28:09.727 "nvme_io": true 00:28:09.727 }, 00:28:09.727 "memory_domains": [ 00:28:09.727 { 00:28:09.727 "dma_device_id": "system", 00:28:09.727 "dma_device_type": 1 00:28:09.727 } 00:28:09.727 ], 00:28:09.727 "driver_specific": { 00:28:09.727 "nvme": [ 00:28:09.727 { 00:28:09.727 "trid": { 00:28:09.727 "trtype": "TCP", 00:28:09.727 "adrfam": "IPv4", 00:28:09.727 "traddr": "10.0.0.2", 00:28:09.727 "trsvcid": "4420", 00:28:09.727 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:09.727 }, 00:28:09.727 "ctrlr_data": { 00:28:09.727 "cntlid": 1, 00:28:09.727 "vendor_id": "0x8086", 00:28:09.727 "model_number": "SPDK bdev Controller", 00:28:09.727 "serial_number": "00000000000000000000", 00:28:09.727 "firmware_revision": 
"24.05.1", 00:28:09.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.727 "oacs": { 00:28:09.727 "security": 0, 00:28:09.727 "format": 0, 00:28:09.727 "firmware": 0, 00:28:09.727 "ns_manage": 0 00:28:09.727 }, 00:28:09.727 "multi_ctrlr": true, 00:28:09.727 "ana_reporting": false 00:28:09.727 }, 00:28:09.727 "vs": { 00:28:09.727 "nvme_version": "1.3" 00:28:09.727 }, 00:28:09.727 "ns_data": { 00:28:09.727 "id": 1, 00:28:09.727 "can_share": true 00:28:09.727 } 00:28:09.727 } 00:28:09.727 ], 00:28:09.727 "mp_policy": "active_passive" 00:28:09.727 } 00:28:09.727 } 00:28:09.727 ] 00:28:09.727 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.727 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:09.727 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.727 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.727 [2024-07-15 16:26:52.699804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:09.727 [2024-07-15 16:26:52.699880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa95df0 (9): Bad file descriptor 00:28:09.987 [2024-07-15 16:26:52.841909] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.987 [ 00:28:09.987 { 00:28:09.987 "name": "nvme0n1", 00:28:09.987 "aliases": [ 00:28:09.987 "830e9e36-9785-469c-8150-c524f49a649a" 00:28:09.987 ], 00:28:09.987 "product_name": "NVMe disk", 00:28:09.987 "block_size": 512, 00:28:09.987 "num_blocks": 2097152, 00:28:09.987 "uuid": "830e9e36-9785-469c-8150-c524f49a649a", 00:28:09.987 "assigned_rate_limits": { 00:28:09.987 "rw_ios_per_sec": 0, 00:28:09.987 "rw_mbytes_per_sec": 0, 00:28:09.987 "r_mbytes_per_sec": 0, 00:28:09.987 "w_mbytes_per_sec": 0 00:28:09.987 }, 00:28:09.987 "claimed": false, 00:28:09.987 "zoned": false, 00:28:09.987 "supported_io_types": { 00:28:09.987 "read": true, 00:28:09.987 "write": true, 00:28:09.987 "unmap": false, 00:28:09.987 "write_zeroes": true, 00:28:09.987 "flush": true, 00:28:09.987 "reset": true, 00:28:09.987 "compare": true, 00:28:09.987 "compare_and_write": true, 00:28:09.987 "abort": true, 00:28:09.987 "nvme_admin": true, 00:28:09.987 "nvme_io": true 00:28:09.987 }, 00:28:09.987 "memory_domains": [ 00:28:09.987 { 00:28:09.987 "dma_device_id": "system", 00:28:09.987 "dma_device_type": 1 00:28:09.987 } 00:28:09.987 ], 00:28:09.987 "driver_specific": { 00:28:09.987 "nvme": [ 00:28:09.987 { 00:28:09.987 "trid": { 00:28:09.987 "trtype": "TCP", 00:28:09.987 "adrfam": "IPv4", 00:28:09.987 "traddr": "10.0.0.2", 00:28:09.987 "trsvcid": "4420", 00:28:09.987 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:09.987 }, 00:28:09.987 "ctrlr_data": { 00:28:09.987 "cntlid": 2, 00:28:09.987 "vendor_id": "0x8086", 00:28:09.987 "model_number": "SPDK bdev Controller", 00:28:09.987 "serial_number": "00000000000000000000", 00:28:09.987 "firmware_revision": "24.05.1", 00:28:09.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.987 
"oacs": { 00:28:09.987 "security": 0, 00:28:09.987 "format": 0, 00:28:09.987 "firmware": 0, 00:28:09.987 "ns_manage": 0 00:28:09.987 }, 00:28:09.987 "multi_ctrlr": true, 00:28:09.987 "ana_reporting": false 00:28:09.987 }, 00:28:09.987 "vs": { 00:28:09.987 "nvme_version": "1.3" 00:28:09.987 }, 00:28:09.987 "ns_data": { 00:28:09.987 "id": 1, 00:28:09.987 "can_share": true 00:28:09.987 } 00:28:09.987 } 00:28:09.987 ], 00:28:09.987 "mp_policy": "active_passive" 00:28:09.987 } 00:28:09.987 } 00:28:09.987 ] 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.987 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.vaut4gtdIT 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.vaut4gtdIT 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.988 [2024-07-15 16:26:52.892446] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:09.988 [2024-07-15 16:26:52.892591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vaut4gtdIT 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.988 [2024-07-15 16:26:52.900463] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vaut4gtdIT 00:28:09.988 16:26:52 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.988 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:09.988 [2024-07-15 16:26:52.908482] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:09.988 [2024-07-15 16:26:52.908558] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:10.256 nvme0n1 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.256 [ 00:28:10.256 { 00:28:10.256 "name": "nvme0n1", 00:28:10.256 "aliases": [ 00:28:10.256 "830e9e36-9785-469c-8150-c524f49a649a" 00:28:10.256 ], 00:28:10.256 "product_name": "NVMe disk", 00:28:10.256 "block_size": 512, 00:28:10.256 "num_blocks": 2097152, 00:28:10.256 "uuid": "830e9e36-9785-469c-8150-c524f49a649a", 00:28:10.256 "assigned_rate_limits": { 00:28:10.256 "rw_ios_per_sec": 0, 00:28:10.256 "rw_mbytes_per_sec": 0, 00:28:10.256 "r_mbytes_per_sec": 0, 00:28:10.256 "w_mbytes_per_sec": 0 00:28:10.256 }, 00:28:10.256 "claimed": false, 00:28:10.256 "zoned": false, 00:28:10.256 "supported_io_types": { 00:28:10.256 "read": true, 00:28:10.256 "write": true, 00:28:10.256 "unmap": false, 00:28:10.256 "write_zeroes": true, 00:28:10.256 "flush": true, 00:28:10.256 "reset": true, 00:28:10.256 "compare": true, 00:28:10.256 "compare_and_write": true, 00:28:10.256 "abort": true, 00:28:10.256 "nvme_admin": true, 00:28:10.256 "nvme_io": true 00:28:10.256 }, 00:28:10.256 "memory_domains": [ 00:28:10.256 { 00:28:10.256 "dma_device_id": "system", 00:28:10.256 "dma_device_type": 1 00:28:10.256 } 00:28:10.256 ], 00:28:10.256 "driver_specific": { 00:28:10.256 "nvme": [ 00:28:10.256 { 00:28:10.256 "trid": { 00:28:10.256 "trtype": "TCP", 00:28:10.256 "adrfam": "IPv4", 00:28:10.256 "traddr": "10.0.0.2", 00:28:10.256 "trsvcid": "4421", 00:28:10.256 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.256 }, 00:28:10.256 "ctrlr_data": { 00:28:10.256 "cntlid": 3, 00:28:10.256 "vendor_id": "0x8086", 00:28:10.256 "model_number": "SPDK bdev Controller", 00:28:10.256 "serial_number": "00000000000000000000", 00:28:10.256 "firmware_revision": "24.05.1", 00:28:10.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.256 "oacs": { 00:28:10.256 "security": 0, 00:28:10.256 "format": 0, 00:28:10.256 "firmware": 0, 00:28:10.256 "ns_manage": 0 00:28:10.256 }, 00:28:10.256 "multi_ctrlr": true, 00:28:10.256 "ana_reporting": false 00:28:10.256 }, 00:28:10.256 "vs": { 00:28:10.256 "nvme_version": "1.3" 00:28:10.256 }, 00:28:10.256 "ns_data": { 00:28:10.256 "id": 1, 00:28:10.256 "can_share": true 00:28:10.256 } 00:28:10.256 } 00:28:10.256 ], 00:28:10.256 "mp_policy": "active_passive" 00:28:10.256 } 00:28:10.256 } 00:28:10.256 ] 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.256 16:26:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- 
# set +x 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.vaut4gtdIT 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.257 rmmod nvme_tcp 00:28:10.257 rmmod nvme_fabrics 00:28:10.257 rmmod nvme_keyring 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 413796 ']' 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 413796 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 413796 ']' 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 413796 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 413796 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 413796' 00:28:10.257 killing process with pid 413796 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 413796 00:28:10.257 [2024-07-15 16:26:53.081860] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:10.257 [2024-07-15 16:26:53.081899] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:10.257 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 413796 00:28:10.516 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.516 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.516 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.516 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.516 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.516 16:26:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.516 16:26:53 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.516 16:26:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.507 16:26:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.507 00:28:12.507 real 0m5.362s 00:28:12.507 user 0m1.964s 00:28:12.507 sys 0m1.758s 00:28:12.507 16:26:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:12.507 16:26:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:12.507 ************************************ 00:28:12.507 END TEST nvmf_async_init 00:28:12.507 ************************************ 00:28:12.507 16:26:55 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:12.507 16:26:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:12.508 16:26:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:12.508 16:26:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.508 ************************************ 00:28:12.508 START TEST dma 00:28:12.508 ************************************ 00:28:12.508 16:26:55 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:12.508 * Looking for test storage... 00:28:12.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:12.508 16:26:55 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.508 16:26:55 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.508 16:26:55 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.508 16:26:55 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.508 16:26:55 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.508 16:26:55 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.508 16:26:55 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.508 16:26:55 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:12.508 16:26:55 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.508 16:26:55 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.508 16:26:55 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:12.508 16:26:55 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:12.508 00:28:12.508 real 0m0.072s 00:28:12.508 user 0m0.042s 00:28:12.508 sys 0m0.035s 00:28:12.508 
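host/dma.sh@12-13 above is effectively the whole test under --transport=tcp: the DMA test only applies to RDMA transports, so it exits 0 immediately, which is why the timing summary records almost no work. The guard, reconstructed from the xtrace (the variable name is an assumption; the trace only shows its expanded value, tcp):

    # host/dma.sh guard, as reconstructed from the trace above
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0   # nothing to exercise over TCP
    fi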
16:26:55 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:12.508 16:26:55 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:12.508 ************************************ 00:28:12.508 END TEST dma 00:28:12.508 ************************************ 00:28:12.767 16:26:55 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:12.767 16:26:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:12.767 16:26:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:12.767 16:26:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.767 ************************************ 00:28:12.767 START TEST nvmf_identify 00:28:12.767 ************************************ 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:12.767 * Looking for test storage... 00:28:12.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.767 16:26:55 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
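nvmftestinit (host/identify.sh@14 above) repeats the NIC discovery already seen before the async_init test: nvmf/common.sh matches the machine's PCI devices against known Intel e810/x722 and Mellanox IDs, then resolves each matching address to its kernel net device through sysfs (the @382-401 entries that follow). The resolution step, condensed from the harness's own lines and assuming pci_devs[] already holds addresses such as 0000:84:00.0:

    # Resolve PCI addresses to net device names, as nvmf/common.sh@382-401 does.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        # (the harness additionally skips interfaces whose operstate is not "up")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done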
00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.767 16:26:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.674 16:26:57 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:14.674 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:14.674 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:14.674 Found net devices under 0000:84:00.0: cvl_0_0 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:14.674 Found net devices under 0000:84:00.1: cvl_0_1 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.674 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:14.675 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:28:14.675 00:28:14.675 --- 10.0.0.2 ping statistics --- 00:28:14.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.675 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:28:14.675 00:28:14.675 --- 10.0.0.1 ping statistics --- 00:28:14.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.675 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=415930 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 415930 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 415930 ']' 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:14.675 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.675 [2024-07-15 16:26:57.557970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
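Unlike async_init, which ran its target single-core (-m 0x1), host/identify.sh@18 above launches nvmf_tgt with -m 0xF, so four reactors come up on cores 0-3 in the entries that follow. The launch pattern, sketched with this workspace's paths; the polling loop is only a stand-in for the waitforlisten helper from autotest_common.sh:

    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &   # -e 0xFFFF: enable all tracepoint groups
    nvmfpid=$!
    until "$RPC" rpc_get_methods >/dev/null 2>&1; do     # poll /var/tmp/spdk.sock until RPCs answer
        sleep 0.1
    done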
00:28:14.675 [2024-07-15 16:26:57.558060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.675 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.675 [2024-07-15 16:26:57.632964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.934 [2024-07-15 16:26:57.728235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.934 [2024-07-15 16:26:57.728299] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.934 [2024-07-15 16:26:57.728339] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.934 [2024-07-15 16:26:57.728353] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.934 [2024-07-15 16:26:57.728365] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.934 [2024-07-15 16:26:57.728424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.934 [2024-07-15 16:26:57.728476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.934 [2024-07-15 16:26:57.728590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.934 [2024-07-15 16:26:57.728593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.934 [2024-07-15 16:26:57.844210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.934 Malloc0 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.934 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.197 [2024-07-15 16:26:57.915122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.197 [ 00:28:15.197 { 00:28:15.197 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:15.197 "subtype": "Discovery", 00:28:15.197 "listen_addresses": [ 00:28:15.197 { 00:28:15.197 "trtype": "TCP", 00:28:15.197 "adrfam": "IPv4", 00:28:15.197 "traddr": "10.0.0.2", 00:28:15.197 "trsvcid": "4420" 00:28:15.197 } 00:28:15.197 ], 00:28:15.197 "allow_any_host": true, 00:28:15.197 "hosts": [] 00:28:15.197 }, 00:28:15.197 { 00:28:15.197 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.197 "subtype": "NVMe", 00:28:15.197 "listen_addresses": [ 00:28:15.197 { 00:28:15.197 "trtype": "TCP", 00:28:15.197 "adrfam": "IPv4", 00:28:15.197 "traddr": "10.0.0.2", 00:28:15.197 "trsvcid": "4420" 00:28:15.197 } 00:28:15.197 ], 00:28:15.197 "allow_any_host": true, 00:28:15.197 "hosts": [], 00:28:15.197 "serial_number": "SPDK00000000000001", 00:28:15.197 "model_number": "SPDK bdev Controller", 00:28:15.197 "max_namespaces": 32, 00:28:15.197 "min_cntlid": 1, 00:28:15.197 "max_cntlid": 65519, 00:28:15.197 "namespaces": [ 00:28:15.197 { 00:28:15.197 "nsid": 1, 00:28:15.197 "bdev_name": "Malloc0", 00:28:15.197 "name": "Malloc0", 00:28:15.197 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:15.197 "eui64": "ABCDEF0123456789", 00:28:15.197 "uuid": "65faaa7f-432f-40e8-896c-2eeaa62cf122" 00:28:15.197 } 00:28:15.197 ] 00:28:15.197 } 00:28:15.197 ] 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.197 16:26:57 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:15.197 [2024-07-15 16:26:57.952069] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
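host/identify.sh@24-35 above provisions the target entirely over RPC before pointing spdk_nvme_identify at it: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 carrying a namespace with fixed NGUID/EUI-64 values, and listeners for both the subsystem and the discovery service. The same sequence through scripts/rpc.py (rpc_cmd in the trace is effectively a wrapper around it):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t tcp -o -u 8192     # -u 8192: max in-capsule data size
    "$RPC" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420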
00:28:15.197 [2024-07-15 16:26:57.952106] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415963 ] 00:28:15.197 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.197 [2024-07-15 16:26:57.984019] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:15.197 [2024-07-15 16:26:57.984091] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:15.197 [2024-07-15 16:26:57.984101] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:15.197 [2024-07-15 16:26:57.984115] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:15.197 [2024-07-15 16:26:57.984128] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:15.197 [2024-07-15 16:26:57.987790] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:15.197 [2024-07-15 16:26:57.987843] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dec980 0 00:28:15.197 [2024-07-15 16:26:57.995764] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:15.197 [2024-07-15 16:26:57.995786] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:15.197 [2024-07-15 16:26:57.995795] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:15.197 [2024-07-15 16:26:57.995801] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:15.197 [2024-07-15 16:26:57.995855] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:57.995867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:57.995874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.197 [2024-07-15 16:26:57.995897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:15.197 [2024-07-15 16:26:57.995924] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.197 [2024-07-15 16:26:58.003756] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.197 [2024-07-15 16:26:58.003774] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.197 [2024-07-15 16:26:58.003781] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:58.003790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.197 [2024-07-15 16:26:58.003811] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:15.197 [2024-07-15 16:26:58.003822] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:15.197 [2024-07-15 16:26:58.003831] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:15.197 [2024-07-15 16:26:58.003853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:58.003861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:58.003868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.197 [2024-07-15 16:26:58.003879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.197 [2024-07-15 16:26:58.003902] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.197 [2024-07-15 16:26:58.004077] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.197 [2024-07-15 16:26:58.004092] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.197 [2024-07-15 16:26:58.004098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:58.004104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.197 [2024-07-15 16:26:58.004118] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:15.197 [2024-07-15 16:26:58.004131] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:15.197 [2024-07-15 16:26:58.004143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:58.004150] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:58.004156] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.197 [2024-07-15 16:26:58.004166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.197 [2024-07-15 16:26:58.004187] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.197 [2024-07-15 16:26:58.004327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.197 [2024-07-15 16:26:58.004339] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.197 [2024-07-15 16:26:58.004345] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.197 [2024-07-15 16:26:58.004351] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.197 [2024-07-15 16:26:58.004361] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:15.197 [2024-07-15 16:26:58.004374] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:15.198 [2024-07-15 16:26:58.004385] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004392] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.004411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.198 [2024-07-15 16:26:58.004431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.198 [2024-07-15 16:26:58.004522] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.198 [2024-07-15 
16:26:58.004536] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.198 [2024-07-15 16:26:58.004542] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004548] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.198 [2024-07-15 16:26:58.004557] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:15.198 [2024-07-15 16:26:58.004573] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004581] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004587] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.004597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.198 [2024-07-15 16:26:58.004616] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.198 [2024-07-15 16:26:58.004725] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.198 [2024-07-15 16:26:58.004744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.198 [2024-07-15 16:26:58.004768] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004774] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.198 [2024-07-15 16:26:58.004784] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:15.198 [2024-07-15 16:26:58.004793] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:15.198 [2024-07-15 16:26:58.004806] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:15.198 [2024-07-15 16:26:58.004915] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:15.198 [2024-07-15 16:26:58.004923] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:15.198 [2024-07-15 16:26:58.004938] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004945] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.004951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.004962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.198 [2024-07-15 16:26:58.004982] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.198 [2024-07-15 16:26:58.005174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.198 [2024-07-15 16:26:58.005186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.198 [2024-07-15 16:26:58.005192] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.198 [2024-07-15 16:26:58.005207] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:15.198 [2024-07-15 16:26:58.005222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.005250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.198 [2024-07-15 16:26:58.005270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.198 [2024-07-15 16:26:58.005360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.198 [2024-07-15 16:26:58.005374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.198 [2024-07-15 16:26:58.005380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.198 [2024-07-15 16:26:58.005395] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:15.198 [2024-07-15 16:26:58.005403] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:15.198 [2024-07-15 16:26:58.005415] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:15.198 [2024-07-15 16:26:58.005428] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:15.198 [2024-07-15 16:26:58.005445] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005453] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.005463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.198 [2024-07-15 16:26:58.005483] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.198 [2024-07-15 16:26:58.005603] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.198 [2024-07-15 16:26:58.005617] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.198 [2024-07-15 16:26:58.005624] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005630] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dec980): datao=0, datal=4096, cccid=0 00:28:15.198 [2024-07-15 16:26:58.005637] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e544c0) on tqpair(0x1dec980): expected_datao=0, payload_size=4096 00:28:15.198 [2024-07-15 16:26:58.005644] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005679] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005699] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005790] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.198 [2024-07-15 16:26:58.005806] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.198 [2024-07-15 16:26:58.005812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005819] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.198 [2024-07-15 16:26:58.005837] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:15.198 [2024-07-15 16:26:58.005846] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:15.198 [2024-07-15 16:26:58.005854] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:15.198 [2024-07-15 16:26:58.005862] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:15.198 [2024-07-15 16:26:58.005869] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:15.198 [2024-07-15 16:26:58.005880] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:15.198 [2024-07-15 16:26:58.005895] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:15.198 [2024-07-15 16:26:58.005907] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.005920] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.005931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:15.198 [2024-07-15 16:26:58.005953] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.198 [2024-07-15 16:26:58.006129] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.198 [2024-07-15 16:26:58.006144] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.198 [2024-07-15 16:26:58.006150] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006156] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e544c0) on tqpair=0x1dec980 00:28:15.198 [2024-07-15 16:26:58.006171] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006178] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006184] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.006194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:15.198 [2024-07-15 16:26:58.006203] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006209] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006214] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.006222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.198 [2024-07-15 16:26:58.006231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006237] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.006251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.198 [2024-07-15 16:26:58.006260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.006280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.198 [2024-07-15 16:26:58.006288] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:15.198 [2024-07-15 16:26:58.006306] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:15.198 [2024-07-15 16:26:58.006317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.198 [2024-07-15 16:26:58.006324] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dec980) 00:28:15.198 [2024-07-15 16:26:58.006334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.198 [2024-07-15 16:26:58.006358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e544c0, cid 0, qid 0 00:28:15.198 [2024-07-15 16:26:58.006371] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54620, cid 1, qid 0 00:28:15.198 [2024-07-15 16:26:58.006379] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54780, cid 2, qid 0 00:28:15.199 [2024-07-15 16:26:58.006386] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.199 [2024-07-15 16:26:58.006394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54a40, cid 4, qid 0 00:28:15.199 [2024-07-15 16:26:58.006589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.199 [2024-07-15 16:26:58.006603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.199 [2024-07-15 16:26:58.006609] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.006615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e54a40) on tqpair=0x1dec980 
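The DEBUG trace above walks SPDK's admin-queue initialization state machine end to end: FABRIC CONNECT, property reads of VS and CAP, a CC check, disabling the controller until CSTS.RDY = 0, setting CC.EN = 1, waiting for CSTS.RDY = 1, IDENTIFY controller (cdw10:00000001), SET FEATURES for async event configuration, four outstanding ASYNC EVENT REQUESTs (cid 0-3), and a GET FEATURES for the keep-alive timer. The "setting state to ..." markers make the sequence easy to lift out of a capture — a minimal sketch, with identify-discovery.log as a hypothetical file holding this output:

  # List the controller-init states in order; uniq collapses repeated polls.
  grep -o 'setting state to [a-zA-Z0-9 .=]*' identify-discovery.log | uniq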
00:28:15.199 [2024-07-15 16:26:58.006624] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:15.199 [2024-07-15 16:26:58.006633] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:15.199 [2024-07-15 16:26:58.006649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.006657] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dec980) 00:28:15.199 [2024-07-15 16:26:58.006667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.199 [2024-07-15 16:26:58.006686] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54a40, cid 4, qid 0 00:28:15.199 [2024-07-15 16:26:58.006911] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.199 [2024-07-15 16:26:58.006933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.199 [2024-07-15 16:26:58.006940] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.006946] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dec980): datao=0, datal=4096, cccid=4 00:28:15.199 [2024-07-15 16:26:58.006954] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e54a40) on tqpair(0x1dec980): expected_datao=0, payload_size=4096 00:28:15.199 [2024-07-15 16:26:58.006961] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.006971] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.006978] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.199 [2024-07-15 16:26:58.007012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.199 [2024-07-15 16:26:58.007034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007041] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e54a40) on tqpair=0x1dec980 00:28:15.199 [2024-07-15 16:26:58.007060] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:15.199 [2024-07-15 16:26:58.007111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dec980) 00:28:15.199 [2024-07-15 16:26:58.007132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.199 [2024-07-15 16:26:58.007142] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007149] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007155] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dec980) 00:28:15.199 [2024-07-15 16:26:58.007163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.199 [2024-07-15 16:26:58.007201] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54a40, cid 4, qid 0 00:28:15.199 [2024-07-15 16:26:58.007212] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54ba0, cid 5, qid 0 00:28:15.199 [2024-07-15 16:26:58.007392] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.199 [2024-07-15 16:26:58.007404] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.199 [2024-07-15 16:26:58.007410] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007416] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dec980): datao=0, datal=1024, cccid=4 00:28:15.199 [2024-07-15 16:26:58.007423] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e54a40) on tqpair(0x1dec980): expected_datao=0, payload_size=1024 00:28:15.199 [2024-07-15 16:26:58.007430] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007439] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007446] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.199 [2024-07-15 16:26:58.007462] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.199 [2024-07-15 16:26:58.007468] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.007474] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e54ba0) on tqpair=0x1dec980 00:28:15.199 [2024-07-15 16:26:58.047899] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.199 [2024-07-15 16:26:58.047918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.199 [2024-07-15 16:26:58.047925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.047932] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e54a40) on tqpair=0x1dec980 00:28:15.199 [2024-07-15 16:26:58.047951] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.047960] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dec980) 00:28:15.199 [2024-07-15 16:26:58.047972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.199 [2024-07-15 16:26:58.048002] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54a40, cid 4, qid 0 00:28:15.199 [2024-07-15 16:26:58.048154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.199 [2024-07-15 16:26:58.048169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.199 [2024-07-15 16:26:58.048181] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048187] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dec980): datao=0, datal=3072, cccid=4 00:28:15.199 [2024-07-15 16:26:58.048194] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e54a40) on tqpair(0x1dec980): expected_datao=0, payload_size=3072 00:28:15.199 [2024-07-15 16:26:58.048201] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048218] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
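The discovery log is being fetched in the standard three-step pattern: a 1024-byte read of log page 0x70 (cdw10:00ff0070) to get the header and record count, a full 3072-byte read (cdw10:02ff0070) once numrec is known — a 1 KB header plus two 1 KB records — and, just below, an 8-byte re-read of the generation counter (cdw10:00010070) to confirm the log did not change mid-transfer. The sizes fall out of cdw10: bits 31:16 carry NUMDL, a 0's-based dword count. A quick decode in shell arithmetic:

  # NUMDL sits in bits 31:16 of cdw10 and is 0's-based, so size = (NUMDL + 1) * 4.
  for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
    printf '%s -> %d bytes of log page 0x70\n' "$cdw10" $(( ((cdw10 >> 16) + 1) * 4 ))
  done
  # -> 1024, 3072, and 8 bytes respectively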
00:28:15.199 [2024-07-15 16:26:58.048227] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.199 [2024-07-15 16:26:58.048337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.199 [2024-07-15 16:26:58.048343] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048350] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e54a40) on tqpair=0x1dec980 00:28:15.199 [2024-07-15 16:26:58.048365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dec980) 00:28:15.199 [2024-07-15 16:26:58.048382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.199 [2024-07-15 16:26:58.048413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e54a40, cid 4, qid 0 00:28:15.199 [2024-07-15 16:26:58.048527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.199 [2024-07-15 16:26:58.048539] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.199 [2024-07-15 16:26:58.048545] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048551] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dec980): datao=0, datal=8, cccid=4 00:28:15.199 [2024-07-15 16:26:58.048558] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e54a40) on tqpair(0x1dec980): expected_datao=0, payload_size=8 00:28:15.199 [2024-07-15 16:26:58.048565] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048574] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.048580] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.088949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.199 [2024-07-15 16:26:58.088966] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.199 [2024-07-15 16:26:58.088973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.199 [2024-07-15 16:26:58.088979] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e54a40) on tqpair=0x1dec980 00:28:15.199 ===================================================== 00:28:15.199 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:15.199 ===================================================== 00:28:15.199 Controller Capabilities/Features 00:28:15.199 ================================ 00:28:15.199 Vendor ID: 0000 00:28:15.199 Subsystem Vendor ID: 0000 00:28:15.199 Serial Number: .................... 00:28:15.199 Model Number: ........................................ 
00:28:15.199 Firmware Version: 24.05.1 00:28:15.199 Recommended Arb Burst: 0 00:28:15.199 IEEE OUI Identifier: 00 00 00 00:28:15.199 Multi-path I/O 00:28:15.199 May have multiple subsystem ports: No 00:28:15.199 May have multiple controllers: No 00:28:15.199 Associated with SR-IOV VF: No 00:28:15.199 Max Data Transfer Size: 131072 00:28:15.199 Max Number of Namespaces: 0 00:28:15.199 Max Number of I/O Queues: 1024 00:28:15.199 NVMe Specification Version (VS): 1.3 00:28:15.199 NVMe Specification Version (Identify): 1.3 00:28:15.199 Maximum Queue Entries: 128 00:28:15.199 Contiguous Queues Required: Yes 00:28:15.199 Arbitration Mechanisms Supported 00:28:15.199 Weighted Round Robin: Not Supported 00:28:15.199 Vendor Specific: Not Supported 00:28:15.199 Reset Timeout: 15000 ms 00:28:15.199 Doorbell Stride: 4 bytes 00:28:15.199 NVM Subsystem Reset: Not Supported 00:28:15.199 Command Sets Supported 00:28:15.199 NVM Command Set: Supported 00:28:15.199 Boot Partition: Not Supported 00:28:15.199 Memory Page Size Minimum: 4096 bytes 00:28:15.199 Memory Page Size Maximum: 4096 bytes 00:28:15.199 Persistent Memory Region: Not Supported 00:28:15.199 Optional Asynchronous Events Supported 00:28:15.199 Namespace Attribute Notices: Not Supported 00:28:15.199 Firmware Activation Notices: Not Supported 00:28:15.199 ANA Change Notices: Not Supported 00:28:15.199 PLE Aggregate Log Change Notices: Not Supported 00:28:15.199 LBA Status Info Alert Notices: Not Supported 00:28:15.199 EGE Aggregate Log Change Notices: Not Supported 00:28:15.199 Normal NVM Subsystem Shutdown event: Not Supported 00:28:15.199 Zone Descriptor Change Notices: Not Supported 00:28:15.199 Discovery Log Change Notices: Supported 00:28:15.199 Controller Attributes 00:28:15.199 128-bit Host Identifier: Not Supported 00:28:15.199 Non-Operational Permissive Mode: Not Supported 00:28:15.199 NVM Sets: Not Supported 00:28:15.199 Read Recovery Levels: Not Supported 00:28:15.199 Endurance Groups: Not Supported 00:28:15.199 Predictable Latency Mode: Not Supported 00:28:15.199 Traffic Based Keep ALive: Not Supported 00:28:15.199 Namespace Granularity: Not Supported 00:28:15.200 SQ Associations: Not Supported 00:28:15.200 UUID List: Not Supported 00:28:15.200 Multi-Domain Subsystem: Not Supported 00:28:15.200 Fixed Capacity Management: Not Supported 00:28:15.200 Variable Capacity Management: Not Supported 00:28:15.200 Delete Endurance Group: Not Supported 00:28:15.200 Delete NVM Set: Not Supported 00:28:15.200 Extended LBA Formats Supported: Not Supported 00:28:15.200 Flexible Data Placement Supported: Not Supported 00:28:15.200 00:28:15.200 Controller Memory Buffer Support 00:28:15.200 ================================ 00:28:15.200 Supported: No 00:28:15.200 00:28:15.200 Persistent Memory Region Support 00:28:15.200 ================================ 00:28:15.200 Supported: No 00:28:15.200 00:28:15.200 Admin Command Set Attributes 00:28:15.200 ============================ 00:28:15.200 Security Send/Receive: Not Supported 00:28:15.200 Format NVM: Not Supported 00:28:15.200 Firmware Activate/Download: Not Supported 00:28:15.200 Namespace Management: Not Supported 00:28:15.200 Device Self-Test: Not Supported 00:28:15.200 Directives: Not Supported 00:28:15.200 NVMe-MI: Not Supported 00:28:15.200 Virtualization Management: Not Supported 00:28:15.200 Doorbell Buffer Config: Not Supported 00:28:15.200 Get LBA Status Capability: Not Supported 00:28:15.200 Command & Feature Lockdown Capability: Not Supported 00:28:15.200 Abort Command Limit: 1 00:28:15.200 
Async Event Request Limit: 4 00:28:15.200 Number of Firmware Slots: N/A 00:28:15.200 Firmware Slot 1 Read-Only: N/A 00:28:15.200 Firmware Activation Without Reset: N/A 00:28:15.200 Multiple Update Detection Support: N/A 00:28:15.200 Firmware Update Granularity: No Information Provided 00:28:15.200 Per-Namespace SMART Log: No 00:28:15.200 Asymmetric Namespace Access Log Page: Not Supported 00:28:15.200 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:15.200 Command Effects Log Page: Not Supported 00:28:15.200 Get Log Page Extended Data: Supported 00:28:15.200 Telemetry Log Pages: Not Supported 00:28:15.200 Persistent Event Log Pages: Not Supported 00:28:15.200 Supported Log Pages Log Page: May Support 00:28:15.200 Commands Supported & Effects Log Page: Not Supported 00:28:15.200 Feature Identifiers & Effects Log Page:May Support 00:28:15.200 NVMe-MI Commands & Effects Log Page: May Support 00:28:15.200 Data Area 4 for Telemetry Log: Not Supported 00:28:15.200 Error Log Page Entries Supported: 128 00:28:15.200 Keep Alive: Not Supported 00:28:15.200 00:28:15.200 NVM Command Set Attributes 00:28:15.200 ========================== 00:28:15.200 Submission Queue Entry Size 00:28:15.200 Max: 1 00:28:15.200 Min: 1 00:28:15.200 Completion Queue Entry Size 00:28:15.200 Max: 1 00:28:15.200 Min: 1 00:28:15.200 Number of Namespaces: 0 00:28:15.200 Compare Command: Not Supported 00:28:15.200 Write Uncorrectable Command: Not Supported 00:28:15.200 Dataset Management Command: Not Supported 00:28:15.200 Write Zeroes Command: Not Supported 00:28:15.200 Set Features Save Field: Not Supported 00:28:15.200 Reservations: Not Supported 00:28:15.200 Timestamp: Not Supported 00:28:15.200 Copy: Not Supported 00:28:15.200 Volatile Write Cache: Not Present 00:28:15.200 Atomic Write Unit (Normal): 1 00:28:15.200 Atomic Write Unit (PFail): 1 00:28:15.200 Atomic Compare & Write Unit: 1 00:28:15.200 Fused Compare & Write: Supported 00:28:15.200 Scatter-Gather List 00:28:15.200 SGL Command Set: Supported 00:28:15.200 SGL Keyed: Supported 00:28:15.200 SGL Bit Bucket Descriptor: Not Supported 00:28:15.200 SGL Metadata Pointer: Not Supported 00:28:15.200 Oversized SGL: Not Supported 00:28:15.200 SGL Metadata Address: Not Supported 00:28:15.200 SGL Offset: Supported 00:28:15.200 Transport SGL Data Block: Not Supported 00:28:15.200 Replay Protected Memory Block: Not Supported 00:28:15.200 00:28:15.200 Firmware Slot Information 00:28:15.200 ========================= 00:28:15.200 Active slot: 0 00:28:15.200 00:28:15.200 00:28:15.200 Error Log 00:28:15.200 ========= 00:28:15.200 00:28:15.200 Active Namespaces 00:28:15.200 ================= 00:28:15.200 Discovery Log Page 00:28:15.200 ================== 00:28:15.200 Generation Counter: 2 00:28:15.200 Number of Records: 2 00:28:15.200 Record Format: 0 00:28:15.200 00:28:15.200 Discovery Log Entry 0 00:28:15.200 ---------------------- 00:28:15.200 Transport Type: 3 (TCP) 00:28:15.200 Address Family: 1 (IPv4) 00:28:15.200 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:15.200 Entry Flags: 00:28:15.200 Duplicate Returned Information: 1 00:28:15.200 Explicit Persistent Connection Support for Discovery: 1 00:28:15.200 Transport Requirements: 00:28:15.200 Secure Channel: Not Required 00:28:15.200 Port ID: 0 (0x0000) 00:28:15.200 Controller ID: 65535 (0xffff) 00:28:15.200 Admin Max SQ Size: 128 00:28:15.200 Transport Service Identifier: 4420 00:28:15.200 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:15.200 Transport Address: 10.0.0.2 00:28:15.200 
Discovery Log Entry 1 00:28:15.200 ---------------------- 00:28:15.200 Transport Type: 3 (TCP) 00:28:15.200 Address Family: 1 (IPv4) 00:28:15.200 Subsystem Type: 2 (NVM Subsystem) 00:28:15.200 Entry Flags: 00:28:15.200 Duplicate Returned Information: 0 00:28:15.200 Explicit Persistent Connection Support for Discovery: 0 00:28:15.200 Transport Requirements: 00:28:15.200 Secure Channel: Not Required 00:28:15.200 Port ID: 0 (0x0000) 00:28:15.200 Controller ID: 65535 (0xffff) 00:28:15.200 Admin Max SQ Size: 128 00:28:15.200 Transport Service Identifier: 4420 00:28:15.200 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:15.200 Transport Address: 10.0.0.2 [2024-07-15 16:26:58.089085] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:15.200 [2024-07-15 16:26:58.089109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.200 [2024-07-15 16:26:58.089121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.200 [2024-07-15 16:26:58.089130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.200 [2024-07-15 16:26:58.089138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.200 [2024-07-15 16:26:58.089155] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089170] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.200 [2024-07-15 16:26:58.089180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.200 [2024-07-15 16:26:58.089208] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.200 [2024-07-15 16:26:58.089350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.200 [2024-07-15 16:26:58.089364] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.200 [2024-07-15 16:26:58.089370] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089376] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.200 [2024-07-15 16:26:58.089389] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089396] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089402] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.200 [2024-07-15 16:26:58.089412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.200 [2024-07-15 16:26:58.089437] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.200 [2024-07-15 16:26:58.089552] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.200 [2024-07-15 16:26:58.089565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.200 [2024-07-15 16:26:58.089572] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089581] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.200 [2024-07-15 16:26:58.089592] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:15.200 [2024-07-15 16:26:58.089600] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:15.200 [2024-07-15 16:26:58.089616] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089630] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.200 [2024-07-15 16:26:58.089639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.200 [2024-07-15 16:26:58.089658] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.200 [2024-07-15 16:26:58.089774] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.200 [2024-07-15 16:26:58.089789] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.200 [2024-07-15 16:26:58.089796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089802] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.200 [2024-07-15 16:26:58.089821] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089829] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.200 [2024-07-15 16:26:58.089835] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.200 [2024-07-15 16:26:58.089846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.200 [2024-07-15 16:26:58.089866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.200 [2024-07-15 16:26:58.090008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.200 [2024-07-15 16:26:58.090019] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.201 [2024-07-15 16:26:58.090026] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090032] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.201 [2024-07-15 16:26:58.090064] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090072] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090078] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.201 [2024-07-15 16:26:58.090088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.201 [2024-07-15 16:26:58.090108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.201 [2024-07-15 16:26:58.090247] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.201 [2024-07-15 
16:26:58.090261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.201 [2024-07-15 16:26:58.090268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090274] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.201 [2024-07-15 16:26:58.090290] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090298] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.201 [2024-07-15 16:26:58.090314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.201 [2024-07-15 16:26:58.090333] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.201 [2024-07-15 16:26:58.090447] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.201 [2024-07-15 16:26:58.090465] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.201 [2024-07-15 16:26:58.090472] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090478] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.201 [2024-07-15 16:26:58.090495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090509] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.201 [2024-07-15 16:26:58.090519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.201 [2024-07-15 16:26:58.090539] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.201 [2024-07-15 16:26:58.090627] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.201 [2024-07-15 16:26:58.090641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.201 [2024-07-15 16:26:58.090647] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090653] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.201 [2024-07-15 16:26:58.090669] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090678] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.090684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.201 [2024-07-15 16:26:58.090693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.201 [2024-07-15 16:26:58.090712] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.201 [2024-07-15 16:26:58.094768] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.201 [2024-07-15 16:26:58.094785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.201 [2024-07-15 16:26:58.094792] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
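The report is done and the tool is tearing the controller down: "Prepare to destruct SSD", the four outstanding async event requests completed as ABORTED - SQ DELETION, and since the controller advertises RTD3E = 0 the driver falls back to its default 10000 ms shutdown timeout. The FABRIC PROPERTY GETs above are the driver polling CSTS for shutdown completion, which lands in 5 milliseconds on the next line. A one-liner to pull that timing out of a saved capture (identify-discovery.log again hypothetical):

  grep -E 'RTD3E|shutdown timeout|shutdown complete' identify-discovery.log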
00:28:15.201 [2024-07-15 16:26:58.094798] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.201 [2024-07-15 16:26:58.094816] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.094825] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.094831] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dec980) 00:28:15.201 [2024-07-15 16:26:58.094842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.201 [2024-07-15 16:26:58.094863] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e548e0, cid 3, qid 0 00:28:15.201 [2024-07-15 16:26:58.095011] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.201 [2024-07-15 16:26:58.095023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.201 [2024-07-15 16:26:58.095029] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.095036] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e548e0) on tqpair=0x1dec980 00:28:15.201 [2024-07-15 16:26:58.095064] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:15.201 00:28:15.201 16:26:58 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:15.201 [2024-07-15 16:26:58.126078] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
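With the discovery controller shut down cleanly, identify.sh line 45 repeats the exercise directly against nqn.2016-06.io.spdk:cnode1 — same tool, same -L all, new process (file prefix spdk_pid415965 versus spdk_pid415963 for the discovery run). The bring-up that follows should be step-for-step identical to the one above; a quick way to verify that from two saved captures (both filenames hypothetical):

  # The state traces of the two runs should match apart from timestamps and NQNs.
  diff <(grep -o 'setting state to .*' identify-discovery.log) \
       <(grep -o 'setting state to .*' identify-cnode1.log)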
00:28:15.201 [2024-07-15 16:26:58.126133] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415965 ] 00:28:15.201 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.201 [2024-07-15 16:26:58.157487] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:15.201 [2024-07-15 16:26:58.157531] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:15.201 [2024-07-15 16:26:58.157540] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:15.201 [2024-07-15 16:26:58.157557] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:15.201 [2024-07-15 16:26:58.157568] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:15.201 [2024-07-15 16:26:58.160776] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:15.201 [2024-07-15 16:26:58.160816] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d9c980 0 00:28:15.201 [2024-07-15 16:26:58.168755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:15.201 [2024-07-15 16:26:58.168774] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:15.201 [2024-07-15 16:26:58.168792] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:15.201 [2024-07-15 16:26:58.168798] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:15.201 [2024-07-15 16:26:58.168835] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.168857] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.201 [2024-07-15 16:26:58.168864] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.201 [2024-07-15 16:26:58.168877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:15.201 [2024-07-15 16:26:58.168903] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.462 [2024-07-15 16:26:58.176751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.462 [2024-07-15 16:26:58.176769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.462 [2024-07-15 16:26:58.176777] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.176784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.462 [2024-07-15 16:26:58.176799] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:15.462 [2024-07-15 16:26:58.176809] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:15.462 [2024-07-15 16:26:58.176818] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:15.462 [2024-07-15 16:26:58.176837] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.176845] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.462 [2024-07-15 
16:26:58.176851] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.462 [2024-07-15 16:26:58.176862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-15 16:26:58.176886] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.462 [2024-07-15 16:26:58.177035] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.462 [2024-07-15 16:26:58.177050] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.462 [2024-07-15 16:26:58.177057] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.177063] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.462 [2024-07-15 16:26:58.177076] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:15.462 [2024-07-15 16:26:58.177094] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:15.462 [2024-07-15 16:26:58.177107] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.177114] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.177120] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.462 [2024-07-15 16:26:58.177130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-15 16:26:58.177151] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.462 [2024-07-15 16:26:58.177241] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.462 [2024-07-15 16:26:58.177256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.462 [2024-07-15 16:26:58.177263] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.177269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.462 [2024-07-15 16:26:58.177278] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:15.462 [2024-07-15 16:26:58.177291] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:15.462 [2024-07-15 16:26:58.177303] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.177310] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.462 [2024-07-15 16:26:58.177316] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.462 [2024-07-15 16:26:58.177325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.462 [2024-07-15 16:26:58.177345] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.462 [2024-07-15 16:26:58.177436] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.462 [2024-07-15 16:26:58.177450] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
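The property reads above (VS, then CAP, then the CC check) mirror the discovery bring-up exactly; only the pointers differ (tqpair 0x1d9c980 here versus 0x1dec980 before) because this is a fresh process. Once this controller reaches ready, any NVMe/TCP host could attach to the same subsystem — a hedged nvme-cli equivalent of what spdk_nvme_identify is doing, assuming nvme-cli is installed and the kernel nvme_tcp module is available:

  modprobe nvme_tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420    # same discovery log page as above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                     # controller device name illustrative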
00:28:15.463 [2024-07-15 16:26:58.177457] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.177463] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.463 [2024-07-15 16:26:58.177472] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:15.463 [2024-07-15 16:26:58.177489] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.177497] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.177503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.463 [2024-07-15 16:26:58.177513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.463 [2024-07-15 16:26:58.177533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.463 [2024-07-15 16:26:58.177621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.463 [2024-07-15 16:26:58.177633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.463 [2024-07-15 16:26:58.177639] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.177646] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.463 [2024-07-15 16:26:58.177654] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:15.463 [2024-07-15 16:26:58.177662] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:15.463 [2024-07-15 16:26:58.177674] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:15.463 [2024-07-15 16:26:58.177787] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:15.463 [2024-07-15 16:26:58.177798] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:15.463 [2024-07-15 16:26:58.177809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.177817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.177823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.463 [2024-07-15 16:26:58.177833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.463 [2024-07-15 16:26:58.177854] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.463 [2024-07-15 16:26:58.178034] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.463 [2024-07-15 16:26:58.178049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.463 [2024-07-15 16:26:58.178056] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178062] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on 
tqpair=0x1d9c980 00:28:15.463 [2024-07-15 16:26:58.178071] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:15.463 [2024-07-15 16:26:58.178088] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178096] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178102] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.463 [2024-07-15 16:26:58.178112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.463 [2024-07-15 16:26:58.178132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.463 [2024-07-15 16:26:58.178236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.463 [2024-07-15 16:26:58.178247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.463 [2024-07-15 16:26:58.178254] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178260] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.463 [2024-07-15 16:26:58.178268] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:15.463 [2024-07-15 16:26:58.178276] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:15.463 [2024-07-15 16:26:58.178288] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:15.463 [2024-07-15 16:26:58.178301] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:15.463 [2024-07-15 16:26:58.178316] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178323] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.463 [2024-07-15 16:26:58.178333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.463 [2024-07-15 16:26:58.178353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.463 [2024-07-15 16:26:58.178474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.463 [2024-07-15 16:26:58.178489] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.463 [2024-07-15 16:26:58.178496] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178505] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=4096, cccid=0 00:28:15.463 [2024-07-15 16:26:58.178513] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e044c0) on tqpair(0x1d9c980): expected_datao=0, payload_size=4096 00:28:15.463 [2024-07-15 16:26:58.178520] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178537] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.178546] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.218879] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.463 [2024-07-15 16:26:58.218897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.463 [2024-07-15 16:26:58.218905] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.218911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.463 [2024-07-15 16:26:58.218928] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:15.463 [2024-07-15 16:26:58.218937] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:15.463 [2024-07-15 16:26:58.218945] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:15.463 [2024-07-15 16:26:58.218951] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:15.463 [2024-07-15 16:26:58.218958] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:15.463 [2024-07-15 16:26:58.218966] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:15.463 [2024-07-15 16:26:58.218981] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:15.463 [2024-07-15 16:26:58.218992] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.219000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.219006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.463 [2024-07-15 16:26:58.219017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:15.463 [2024-07-15 16:26:58.219054] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.463 [2024-07-15 16:26:58.219169] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.463 [2024-07-15 16:26:58.219181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.463 [2024-07-15 16:26:58.219188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.219194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e044c0) on tqpair=0x1d9c980 00:28:15.463 [2024-07-15 16:26:58.219205] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.219212] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.463 [2024-07-15 16:26:58.219218] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d9c980) 00:28:15.463 [2024-07-15 16:26:58.219228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.463 [2024-07-15 16:26:58.219237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219249] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d9c980) 00:28:15.464 [2024-07-15 16:26:58.219257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-15 16:26:58.219266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219282] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d9c980) 00:28:15.464 [2024-07-15 16:26:58.219291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-15 16:26:58.219300] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219306] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219312] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9c980) 00:28:15.464 [2024-07-15 16:26:58.219320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.464 [2024-07-15 16:26:58.219328] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.219345] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.219357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219363] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9c980) 00:28:15.464 [2024-07-15 16:26:58.219373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-15 16:26:58.219394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e044c0, cid 0, qid 0 00:28:15.464 [2024-07-15 16:26:58.219405] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04620, cid 1, qid 0 00:28:15.464 [2024-07-15 16:26:58.219412] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04780, cid 2, qid 0 00:28:15.464 [2024-07-15 16:26:58.219419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e048e0, cid 3, qid 0 00:28:15.464 [2024-07-15 16:26:58.219426] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04a40, cid 4, qid 0 00:28:15.464 [2024-07-15 16:26:58.219597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.464 [2024-07-15 16:26:58.219611] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.464 [2024-07-15 16:26:58.219618] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219625] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04a40) on tqpair=0x1d9c980 00:28:15.464 [2024-07-15 16:26:58.219633] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:15.464 [2024-07-15 16:26:58.219642] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.219655] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.219676] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.219686] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9c980) 00:28:15.464 [2024-07-15 16:26:58.219708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:15.464 [2024-07-15 16:26:58.219728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04a40, cid 4, qid 0 00:28:15.464 [2024-07-15 16:26:58.219922] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.464 [2024-07-15 16:26:58.219938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.464 [2024-07-15 16:26:58.219945] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.219956] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04a40) on tqpair=0x1d9c980 00:28:15.464 [2024-07-15 16:26:58.220025] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.220059] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.220074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.220081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9c980) 00:28:15.464 [2024-07-15 16:26:58.220091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-15 16:26:58.220111] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04a40, cid 4, qid 0 00:28:15.464 [2024-07-15 16:26:58.220298] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.464 [2024-07-15 16:26:58.220312] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.464 [2024-07-15 16:26:58.220319] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.220325] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=4096, cccid=4 00:28:15.464 [2024-07-15 16:26:58.220332] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e04a40) on tqpair(0x1d9c980): expected_datao=0, payload_size=4096 00:28:15.464 [2024-07-15 16:26:58.220339] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.220356] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.220365] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.264766] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.464 [2024-07-15 16:26:58.264784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.464 [2024-07-15 16:26:58.264791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.264798] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04a40) on tqpair=0x1d9c980 00:28:15.464 [2024-07-15 16:26:58.264815] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:15.464 [2024-07-15 16:26:58.264839] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.264857] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:15.464 [2024-07-15 16:26:58.264875] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.264883] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9c980) 00:28:15.464 [2024-07-15 16:26:58.264893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-15 16:26:58.264916] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04a40, cid 4, qid 0 00:28:15.464 [2024-07-15 16:26:58.265102] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.464 [2024-07-15 16:26:58.265117] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.464 [2024-07-15 16:26:58.265124] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.464 [2024-07-15 16:26:58.265130] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=4096, cccid=4 00:28:15.465 [2024-07-15 16:26:58.265137] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e04a40) on tqpair(0x1d9c980): expected_datao=0, payload_size=4096 00:28:15.465 [2024-07-15 16:26:58.265144] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265154] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265165] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265178] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.465 [2024-07-15 16:26:58.265187] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.465 [2024-07-15 16:26:58.265194] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265200] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04a40) on tqpair=0x1d9c980 00:28:15.465 [2024-07-15 16:26:58.265222] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265241] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265254] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.265271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-15 16:26:58.265292] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04a40, cid 4, qid 0 00:28:15.465 [2024-07-15 16:26:58.265404] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.465 [2024-07-15 16:26:58.265419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.465 [2024-07-15 16:26:58.265426] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265432] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=4096, cccid=4 00:28:15.465 [2024-07-15 16:26:58.265439] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e04a40) on tqpair(0x1d9c980): expected_datao=0, payload_size=4096 00:28:15.465 [2024-07-15 16:26:58.265445] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265455] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265462] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.465 [2024-07-15 16:26:58.265483] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.465 [2024-07-15 16:26:58.265490] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265496] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04a40) on tqpair=0x1d9c980 00:28:15.465 [2024-07-15 16:26:58.265510] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265524] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265538] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265549] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265557] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265565] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:15.465 [2024-07-15 16:26:58.265572] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:15.465 [2024-07-15 16:26:58.265580] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:15.465 [2024-07-15 16:26:58.265601] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265610] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.265623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-15 16:26:58.265634] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.265655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.465 [2024-07-15 16:26:58.265678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04a40, cid 4, qid 0 00:28:15.465 [2024-07-15 16:26:58.265689] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04ba0, cid 5, qid 0 00:28:15.465 [2024-07-15 16:26:58.265911] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.465 [2024-07-15 16:26:58.265926] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.465 [2024-07-15 16:26:58.265933] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265940] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04a40) on tqpair=0x1d9c980 00:28:15.465 [2024-07-15 16:26:58.265951] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.465 [2024-07-15 16:26:58.265961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.465 [2024-07-15 16:26:58.265968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.265975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04ba0) on tqpair=0x1d9c980 00:28:15.465 [2024-07-15 16:26:58.265992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.266011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-15 16:26:58.266032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04ba0, cid 5, qid 0 00:28:15.465 [2024-07-15 16:26:58.266154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.465 [2024-07-15 16:26:58.266168] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.465 [2024-07-15 16:26:58.266175] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266181] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04ba0) on tqpair=0x1d9c980 00:28:15.465 [2024-07-15 16:26:58.266199] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266207] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.266217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-15 16:26:58.266236] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04ba0, cid 5, qid 0 00:28:15.465 [2024-07-15 16:26:58.266341] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.465 [2024-07-15 16:26:58.266353] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.465 [2024-07-15 16:26:58.266360] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266367] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04ba0) on tqpair=0x1d9c980 00:28:15.465 [2024-07-15 16:26:58.266383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.266401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-15 16:26:58.266420] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04ba0, cid 5, qid 0 00:28:15.465 [2024-07-15 16:26:58.266508] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.465 [2024-07-15 16:26:58.266520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.465 [2024-07-15 16:26:58.266527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04ba0) on tqpair=0x1d9c980 00:28:15.465 [2024-07-15 16:26:58.266552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266561] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.266571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-15 16:26:58.266582] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.465 [2024-07-15 16:26:58.266589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d9c980) 00:28:15.465 [2024-07-15 16:26:58.266598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.465 [2024-07-15 16:26:58.266608] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.266615] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d9c980) 00:28:15.466 [2024-07-15 16:26:58.266623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-15 16:26:58.266634] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.266641] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d9c980) 00:28:15.466 [2024-07-15 16:26:58.266650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.466 [2024-07-15 16:26:58.266670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04ba0, cid 5, qid 0 00:28:15.466 [2024-07-15 16:26:58.266680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04a40, cid 4, qid 0 00:28:15.466 [2024-07-15 16:26:58.266688] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1e04d00, cid 6, qid 0 00:28:15.466 [2024-07-15 16:26:58.266695] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04e60, cid 7, qid 0 00:28:15.466 [2024-07-15 16:26:58.266892] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.466 [2024-07-15 16:26:58.266908] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.466 [2024-07-15 16:26:58.266915] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.266921] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=8192, cccid=5 00:28:15.466 [2024-07-15 16:26:58.266928] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e04ba0) on tqpair(0x1d9c980): expected_datao=0, payload_size=8192 00:28:15.466 [2024-07-15 16:26:58.266936] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.266971] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.266981] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.266990] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.466 [2024-07-15 16:26:58.266999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.466 [2024-07-15 16:26:58.267006] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267012] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=512, cccid=4 00:28:15.466 [2024-07-15 16:26:58.267020] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e04a40) on tqpair(0x1d9c980): expected_datao=0, payload_size=512 00:28:15.466 [2024-07-15 16:26:58.267030] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267040] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267062] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.466 [2024-07-15 16:26:58.267079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.466 [2024-07-15 16:26:58.267086] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267092] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=512, cccid=6 00:28:15.466 [2024-07-15 16:26:58.267099] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e04d00) on tqpair(0x1d9c980): expected_datao=0, payload_size=512 00:28:15.466 [2024-07-15 16:26:58.267106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267114] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267121] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267129] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:15.466 [2024-07-15 16:26:58.267138] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:15.466 [2024-07-15 16:26:58.267145] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267151] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d9c980): datao=0, datal=4096, cccid=7 
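[The GET LOG PAGE (02h) commands fanned out above -- error log 01h, health 02h, firmware slot 03h, command effects 05h -- come back as the 512-, 8192- and 4096-byte C2H data PDUs being handled here. As a hedged sketch of the same round trip through SPDK's public admin API -- again not taken from the test scripts, and assuming a ctrlr obtained as in the earlier sketch -- fetching the health page looks roughly like:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool g_log_done;

/* Completion callback: runs from spdk_nvme_ctrlr_process_admin_completions(). */
static void
health_log_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	const struct spdk_nvme_health_information_page *hp = ctx;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* Mirrors the "Current Temperature" line in the summary below. */
		printf("composite temperature: %u Kelvin\n", hp->temperature);
	}
	g_log_done = true;
}

static int
fetch_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_health_information_page *hp;
	int rc;

	/* DMA-safe buffer; the transport may place C2H data into it directly. */
	hp = spdk_zmalloc(sizeof(*hp), 0x1000, NULL,
			  SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (hp == NULL) {
		return -ENOMEM;
	}

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
					      SPDK_NVME_GLOBAL_NS_TAG, hp, sizeof(*hp),
					      0, health_log_cb, hp);
	if (rc == 0) {
		/* Poll the admin queue until the completion above fires. */
		while (!g_log_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
	}
	spdk_free(hp);
	return rc;
}

Each call into the poll loop is what produces one of the nvme_tcp_pdu_ch_handle / nvme_tcp_pdu_psh_handle pairs seen throughout this trace.]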
00:28:15.466 [2024-07-15 16:26:58.267158] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e04e60) on tqpair(0x1d9c980): expected_datao=0, payload_size=4096 00:28:15.466 [2024-07-15 16:26:58.267165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267177] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267184] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267195] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.466 [2024-07-15 16:26:58.267204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.466 [2024-07-15 16:26:58.267211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267218] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04ba0) on tqpair=0x1d9c980 00:28:15.466 [2024-07-15 16:26:58.267236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.466 [2024-07-15 16:26:58.267247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.466 [2024-07-15 16:26:58.267254] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267260] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04a40) on tqpair=0x1d9c980 00:28:15.466 [2024-07-15 16:26:58.267274] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.466 [2024-07-15 16:26:58.267284] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.466 [2024-07-15 16:26:58.267291] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267297] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04d00) on tqpair=0x1d9c980 00:28:15.466 [2024-07-15 16:26:58.267321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.466 [2024-07-15 16:26:58.267331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.466 [2024-07-15 16:26:58.267338] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.466 [2024-07-15 16:26:58.267344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04e60) on tqpair=0x1d9c980 00:28:15.466 ===================================================== 00:28:15.466 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.466 ===================================================== 00:28:15.466 Controller Capabilities/Features 00:28:15.466 ================================ 00:28:15.466 Vendor ID: 8086 00:28:15.466 Subsystem Vendor ID: 8086 00:28:15.466 Serial Number: SPDK00000000000001 00:28:15.466 Model Number: SPDK bdev Controller 00:28:15.466 Firmware Version: 24.05.1 00:28:15.466 Recommended Arb Burst: 6 00:28:15.466 IEEE OUI Identifier: e4 d2 5c 00:28:15.466 Multi-path I/O 00:28:15.466 May have multiple subsystem ports: Yes 00:28:15.466 May have multiple controllers: Yes 00:28:15.466 Associated with SR-IOV VF: No 00:28:15.466 Max Data Transfer Size: 131072 00:28:15.466 Max Number of Namespaces: 32 00:28:15.466 Max Number of I/O Queues: 127 00:28:15.466 NVMe Specification Version (VS): 1.3 00:28:15.466 NVMe Specification Version (Identify): 1.3 00:28:15.466 Maximum Queue Entries: 128 00:28:15.466 Contiguous Queues Required: Yes 00:28:15.466 Arbitration Mechanisms Supported 00:28:15.466 Weighted Round Robin: Not Supported 00:28:15.466 Vendor 
Specific: Not Supported 00:28:15.466 Reset Timeout: 15000 ms 00:28:15.466 Doorbell Stride: 4 bytes 00:28:15.466 NVM Subsystem Reset: Not Supported 00:28:15.466 Command Sets Supported 00:28:15.466 NVM Command Set: Supported 00:28:15.466 Boot Partition: Not Supported 00:28:15.466 Memory Page Size Minimum: 4096 bytes 00:28:15.466 Memory Page Size Maximum: 4096 bytes 00:28:15.466 Persistent Memory Region: Not Supported 00:28:15.466 Optional Asynchronous Events Supported 00:28:15.466 Namespace Attribute Notices: Supported 00:28:15.466 Firmware Activation Notices: Not Supported 00:28:15.466 ANA Change Notices: Not Supported 00:28:15.466 PLE Aggregate Log Change Notices: Not Supported 00:28:15.466 LBA Status Info Alert Notices: Not Supported 00:28:15.466 EGE Aggregate Log Change Notices: Not Supported 00:28:15.466 Normal NVM Subsystem Shutdown event: Not Supported 00:28:15.466 Zone Descriptor Change Notices: Not Supported 00:28:15.466 Discovery Log Change Notices: Not Supported 00:28:15.466 Controller Attributes 00:28:15.466 128-bit Host Identifier: Supported 00:28:15.466 Non-Operational Permissive Mode: Not Supported 00:28:15.466 NVM Sets: Not Supported 00:28:15.466 Read Recovery Levels: Not Supported 00:28:15.466 Endurance Groups: Not Supported 00:28:15.466 Predictable Latency Mode: Not Supported 00:28:15.466 Traffic Based Keep ALive: Not Supported 00:28:15.466 Namespace Granularity: Not Supported 00:28:15.466 SQ Associations: Not Supported 00:28:15.466 UUID List: Not Supported 00:28:15.466 Multi-Domain Subsystem: Not Supported 00:28:15.466 Fixed Capacity Management: Not Supported 00:28:15.466 Variable Capacity Management: Not Supported 00:28:15.466 Delete Endurance Group: Not Supported 00:28:15.466 Delete NVM Set: Not Supported 00:28:15.466 Extended LBA Formats Supported: Not Supported 00:28:15.467 Flexible Data Placement Supported: Not Supported 00:28:15.467 00:28:15.467 Controller Memory Buffer Support 00:28:15.467 ================================ 00:28:15.467 Supported: No 00:28:15.467 00:28:15.467 Persistent Memory Region Support 00:28:15.467 ================================ 00:28:15.467 Supported: No 00:28:15.467 00:28:15.467 Admin Command Set Attributes 00:28:15.467 ============================ 00:28:15.467 Security Send/Receive: Not Supported 00:28:15.467 Format NVM: Not Supported 00:28:15.467 Firmware Activate/Download: Not Supported 00:28:15.467 Namespace Management: Not Supported 00:28:15.467 Device Self-Test: Not Supported 00:28:15.467 Directives: Not Supported 00:28:15.467 NVMe-MI: Not Supported 00:28:15.467 Virtualization Management: Not Supported 00:28:15.467 Doorbell Buffer Config: Not Supported 00:28:15.467 Get LBA Status Capability: Not Supported 00:28:15.467 Command & Feature Lockdown Capability: Not Supported 00:28:15.467 Abort Command Limit: 4 00:28:15.467 Async Event Request Limit: 4 00:28:15.467 Number of Firmware Slots: N/A 00:28:15.467 Firmware Slot 1 Read-Only: N/A 00:28:15.467 Firmware Activation Without Reset: N/A 00:28:15.467 Multiple Update Detection Support: N/A 00:28:15.467 Firmware Update Granularity: No Information Provided 00:28:15.467 Per-Namespace SMART Log: No 00:28:15.467 Asymmetric Namespace Access Log Page: Not Supported 00:28:15.467 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:15.467 Command Effects Log Page: Supported 00:28:15.467 Get Log Page Extended Data: Supported 00:28:15.467 Telemetry Log Pages: Not Supported 00:28:15.467 Persistent Event Log Pages: Not Supported 00:28:15.467 Supported Log Pages Log Page: May Support 00:28:15.467 Commands 
Supported & Effects Log Page: Not Supported 00:28:15.467 Feature Identifiers & Effects Log Page:May Support 00:28:15.467 NVMe-MI Commands & Effects Log Page: May Support 00:28:15.467 Data Area 4 for Telemetry Log: Not Supported 00:28:15.467 Error Log Page Entries Supported: 128 00:28:15.467 Keep Alive: Supported 00:28:15.467 Keep Alive Granularity: 10000 ms 00:28:15.467 00:28:15.467 NVM Command Set Attributes 00:28:15.467 ========================== 00:28:15.467 Submission Queue Entry Size 00:28:15.467 Max: 64 00:28:15.467 Min: 64 00:28:15.467 Completion Queue Entry Size 00:28:15.467 Max: 16 00:28:15.467 Min: 16 00:28:15.467 Number of Namespaces: 32 00:28:15.467 Compare Command: Supported 00:28:15.467 Write Uncorrectable Command: Not Supported 00:28:15.467 Dataset Management Command: Supported 00:28:15.467 Write Zeroes Command: Supported 00:28:15.467 Set Features Save Field: Not Supported 00:28:15.467 Reservations: Supported 00:28:15.467 Timestamp: Not Supported 00:28:15.467 Copy: Supported 00:28:15.467 Volatile Write Cache: Present 00:28:15.467 Atomic Write Unit (Normal): 1 00:28:15.467 Atomic Write Unit (PFail): 1 00:28:15.467 Atomic Compare & Write Unit: 1 00:28:15.467 Fused Compare & Write: Supported 00:28:15.467 Scatter-Gather List 00:28:15.467 SGL Command Set: Supported 00:28:15.467 SGL Keyed: Supported 00:28:15.467 SGL Bit Bucket Descriptor: Not Supported 00:28:15.467 SGL Metadata Pointer: Not Supported 00:28:15.467 Oversized SGL: Not Supported 00:28:15.467 SGL Metadata Address: Not Supported 00:28:15.467 SGL Offset: Supported 00:28:15.467 Transport SGL Data Block: Not Supported 00:28:15.467 Replay Protected Memory Block: Not Supported 00:28:15.467 00:28:15.467 Firmware Slot Information 00:28:15.467 ========================= 00:28:15.467 Active slot: 1 00:28:15.467 Slot 1 Firmware Revision: 24.05.1 00:28:15.467 00:28:15.467 00:28:15.467 Commands Supported and Effects 00:28:15.467 ============================== 00:28:15.467 Admin Commands 00:28:15.467 -------------- 00:28:15.467 Get Log Page (02h): Supported 00:28:15.467 Identify (06h): Supported 00:28:15.467 Abort (08h): Supported 00:28:15.467 Set Features (09h): Supported 00:28:15.467 Get Features (0Ah): Supported 00:28:15.467 Asynchronous Event Request (0Ch): Supported 00:28:15.467 Keep Alive (18h): Supported 00:28:15.467 I/O Commands 00:28:15.467 ------------ 00:28:15.467 Flush (00h): Supported LBA-Change 00:28:15.467 Write (01h): Supported LBA-Change 00:28:15.467 Read (02h): Supported 00:28:15.467 Compare (05h): Supported 00:28:15.467 Write Zeroes (08h): Supported LBA-Change 00:28:15.467 Dataset Management (09h): Supported LBA-Change 00:28:15.467 Copy (19h): Supported LBA-Change 00:28:15.467 Unknown (79h): Supported LBA-Change 00:28:15.467 Unknown (7Ah): Supported 00:28:15.467 00:28:15.467 Error Log 00:28:15.467 ========= 00:28:15.467 00:28:15.467 Arbitration 00:28:15.467 =========== 00:28:15.467 Arbitration Burst: 1 00:28:15.467 00:28:15.467 Power Management 00:28:15.467 ================ 00:28:15.467 Number of Power States: 1 00:28:15.467 Current Power State: Power State #0 00:28:15.467 Power State #0: 00:28:15.467 Max Power: 0.00 W 00:28:15.467 Non-Operational State: Operational 00:28:15.467 Entry Latency: Not Reported 00:28:15.467 Exit Latency: Not Reported 00:28:15.467 Relative Read Throughput: 0 00:28:15.467 Relative Read Latency: 0 00:28:15.467 Relative Write Throughput: 0 00:28:15.467 Relative Write Latency: 0 00:28:15.467 Idle Power: Not Reported 00:28:15.467 Active Power: Not Reported 00:28:15.467 Non-Operational 
Permissive Mode: Not Supported 00:28:15.467 00:28:15.467 Health Information 00:28:15.467 ================== 00:28:15.467 Critical Warnings: 00:28:15.467 Available Spare Space: OK 00:28:15.467 Temperature: OK 00:28:15.467 Device Reliability: OK 00:28:15.467 Read Only: No 00:28:15.467 Volatile Memory Backup: OK 00:28:15.467 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:15.467 Temperature Threshold: [2024-07-15 16:26:58.267455] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.467 [2024-07-15 16:26:58.267467] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d9c980) 00:28:15.467 [2024-07-15 16:26:58.267477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.467 [2024-07-15 16:26:58.267498] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e04e60, cid 7, qid 0 00:28:15.467 [2024-07-15 16:26:58.267650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.467 [2024-07-15 16:26:58.267663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.467 [2024-07-15 16:26:58.267670] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.467 [2024-07-15 16:26:58.267676] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e04e60) on tqpair=0x1d9c980 00:28:15.467 [2024-07-15 16:26:58.267714] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:15.467 [2024-07-15 16:26:58.267758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.467 [2024-07-15 16:26:58.267771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.467 [2024-07-15 16:26:58.267780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.467 [2024-07-15 16:26:58.267790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.467 [2024-07-15 16:26:58.267802] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.467 [2024-07-15 16:26:58.267810] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.467 [2024-07-15 16:26:58.267816] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9c980) 00:28:15.468 [2024-07-15 16:26:58.267830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.468 [2024-07-15 16:26:58.267852] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e048e0, cid 3, qid 0 00:28:15.468 [2024-07-15 16:26:58.268012] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.468 [2024-07-15 16:26:58.268042] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.468 [2024-07-15 16:26:58.268049] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268055] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e048e0) on tqpair=0x1d9c980 00:28:15.468 [2024-07-15 16:26:58.268067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268074] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268080] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9c980) 00:28:15.468 [2024-07-15 16:26:58.268090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.468 [2024-07-15 16:26:58.268115] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e048e0, cid 3, qid 0 00:28:15.468 [2024-07-15 16:26:58.268216] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.468 [2024-07-15 16:26:58.268228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.468 [2024-07-15 16:26:58.268235] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268241] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e048e0) on tqpair=0x1d9c980 00:28:15.468 [2024-07-15 16:26:58.268249] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:15.468 [2024-07-15 16:26:58.268257] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:15.468 [2024-07-15 16:26:58.268271] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268280] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9c980) 00:28:15.468 [2024-07-15 16:26:58.268295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.468 [2024-07-15 16:26:58.268315] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e048e0, cid 3, qid 0 00:28:15.468 [2024-07-15 16:26:58.268401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.468 [2024-07-15 16:26:58.268413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.468 [2024-07-15 16:26:58.268420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e048e0) on tqpair=0x1d9c980 00:28:15.468 [2024-07-15 16:26:58.268443] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268451] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268457] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9c980) 00:28:15.468 [2024-07-15 16:26:58.268467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.468 [2024-07-15 16:26:58.268486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e048e0, cid 3, qid 0 00:28:15.468 [2024-07-15 16:26:58.268568] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.468 [2024-07-15 16:26:58.268579] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.468 [2024-07-15 16:26:58.268586] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e048e0) on tqpair=0x1d9c980 00:28:15.468 [2024-07-15 16:26:58.268609] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.268623] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9c980) 00:28:15.468 [2024-07-15 16:26:58.268633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.468 [2024-07-15 16:26:58.268652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e048e0, cid 3, qid 0 00:28:15.468 [2024-07-15 16:26:58.272752] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.468 [2024-07-15 16:26:58.272769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.468 [2024-07-15 16:26:58.272776] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.272783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e048e0) on tqpair=0x1d9c980 00:28:15.468 [2024-07-15 16:26:58.272802] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.272810] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:15.468 [2024-07-15 16:26:58.272817] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d9c980) 00:28:15.468 [2024-07-15 16:26:58.272827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.468 [2024-07-15 16:26:58.272849] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e048e0, cid 3, qid 0 00:28:15.468 [2024-07-15 16:26:58.272990] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:15.468 [2024-07-15 16:26:58.273005] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:15.468 [2024-07-15 16:26:58.273027] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:15.469 [2024-07-15 16:26:58.273034] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1e048e0) on tqpair=0x1d9c980 00:28:15.469 [2024-07-15 16:26:58.273049] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:15.469 0 Kelvin (-273 Celsius) 00:28:15.469 Available Spare: 0% 00:28:15.469 Available Spare Threshold: 0% 00:28:15.469 Life Percentage Used: 0% 00:28:15.469 Data Units Read: 0 00:28:15.469 Data Units Written: 0 00:28:15.469 Host Read Commands: 0 00:28:15.469 Host Write Commands: 0 00:28:15.469 Controller Busy Time: 0 minutes 00:28:15.469 Power Cycles: 0 00:28:15.469 Power On Hours: 0 hours 00:28:15.469 Unsafe Shutdowns: 0 00:28:15.469 Unrecoverable Media Errors: 0 00:28:15.469 Lifetime Error Log Entries: 0 00:28:15.469 Warning Temperature Time: 0 minutes 00:28:15.469 Critical Temperature Time: 0 minutes 00:28:15.469 00:28:15.469 Number of Queues 00:28:15.469 ================ 00:28:15.469 Number of I/O Submission Queues: 127 00:28:15.469 Number of I/O Completion Queues: 127 00:28:15.469 00:28:15.469 Active Namespaces 00:28:15.469 ================= 00:28:15.469 Namespace ID:1 00:28:15.469 Error Recovery Timeout: Unlimited 00:28:15.469 Command Set Identifier: NVM (00h) 00:28:15.469 Deallocate: Supported 00:28:15.469 Deallocated/Unwritten Error: Not Supported 00:28:15.469 Deallocated Read Value: Unknown 00:28:15.469 Deallocate in Write Zeroes: Not Supported 
00:28:15.469 Deallocated Guard Field: 0xFFFF 00:28:15.469 Flush: Supported 00:28:15.469 Reservation: Supported 00:28:15.469 Namespace Sharing Capabilities: Multiple Controllers 00:28:15.469 Size (in LBAs): 131072 (0GiB) 00:28:15.469 Capacity (in LBAs): 131072 (0GiB) 00:28:15.469 Utilization (in LBAs): 131072 (0GiB) 00:28:15.469 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:15.469 EUI64: ABCDEF0123456789 00:28:15.469 UUID: 65faaa7f-432f-40e8-896c-2eeaa62cf122 00:28:15.469 Thin Provisioning: Not Supported 00:28:15.469 Per-NS Atomic Units: Yes 00:28:15.469 Atomic Boundary Size (Normal): 0 00:28:15.469 Atomic Boundary Size (PFail): 0 00:28:15.469 Atomic Boundary Offset: 0 00:28:15.469 Maximum Single Source Range Length: 65535 00:28:15.469 Maximum Copy Length: 65535 00:28:15.469 Maximum Source Range Count: 1 00:28:15.469 NGUID/EUI64 Never Reused: No 00:28:15.469 Namespace Write Protected: No 00:28:15.469 Number of LBA Formats: 1 00:28:15.469 Current LBA Format: LBA Format #00 00:28:15.469 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:15.469 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.469 rmmod nvme_tcp 00:28:15.469 rmmod nvme_fabrics 00:28:15.469 rmmod nvme_keyring 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 415930 ']' 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 415930 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 415930 ']' 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 415930 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 415930 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify 
-- common/autotest_common.sh@964 -- # echo 'killing process with pid 415930' 00:28:15.469 killing process with pid 415930 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 415930 00:28:15.469 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 415930 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.727 16:26:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.260 16:27:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:18.260 00:28:18.260 real 0m5.128s 00:28:18.260 user 0m4.040s 00:28:18.260 sys 0m1.695s 00:28:18.260 16:27:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:18.260 16:27:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:18.260 ************************************ 00:28:18.260 END TEST nvmf_identify 00:28:18.260 ************************************ 00:28:18.260 16:27:00 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:18.260 16:27:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:18.260 16:27:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:18.260 16:27:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:18.260 ************************************ 00:28:18.260 START TEST nvmf_perf 00:28:18.260 ************************************ 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:18.260 * Looking for test storage... 
00:28:18.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.260 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.261 16:27:00 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:18.261 16:27:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:20.160 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:20.160 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:20.160 Found net devices under 0000:84:00.0: cvl_0_0 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:20.160 Found net devices under 0000:84:00.1: cvl_0_1 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:28:20.160 00:28:20.160 --- 10.0.0.2 ping statistics --- 00:28:20.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.160 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:20.160 00:28:20.160 --- 10.0.0.1 ping statistics --- 00:28:20.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.160 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:20.160 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=418020 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 418020 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 418020 ']' 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:20.161 16:27:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:20.161 [2024-07-15 16:27:02.861672] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:20.161 [2024-07-15 16:27:02.861803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.161 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.161 [2024-07-15 16:27:02.929607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.161 [2024-07-15 16:27:03.022434] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.161 [2024-07-15 16:27:03.022490] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
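The ping exchange above validates a point-to-point topology built from the two ports of one NIC, with the target port isolated in a network namespace; condensed from the nvmf_tcp_init trace above (interface names as discovered on this host):

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # verify both directions

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why it can listen on 10.0.0.2 while perf connects from the host side.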
00:28:20.161 [2024-07-15 16:27:03.022520] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.161 [2024-07-15 16:27:03.022531] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.161 [2024-07-15 16:27:03.022542] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.161 [2024-07-15 16:27:03.022626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.161 [2024-07-15 16:27:03.022695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.161 [2024-07-15 16:27:03.022876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.161 [2024-07-15 16:27:03.022879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:20.418 16:27:03 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:23.703 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:23.703 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:23.703 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:28:23.703 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:23.960 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:23.960 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:28:23.960 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:23.960 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:23.960 16:27:06 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:24.218 [2024-07-15 16:27:07.050777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.218 16:27:07 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.475 16:27:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:24.475 16:27:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:24.733 16:27:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:24.733 16:27:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:24.991 16:27:07 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.249 [2024-07-15 16:27:08.110649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.249 16:27:08 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:25.508 16:27:08 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:28:25.508 16:27:08 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:28:25.508 16:27:08 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:25.508 16:27:08 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:28:26.882 Initializing NVMe Controllers 00:28:26.882 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:28:26.882 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:28:26.882 Initialization complete. Launching workers. 00:28:26.882 ======================================================== 00:28:26.882 Latency(us) 00:28:26.882 Device Information : IOPS MiB/s Average min max 00:28:26.882 PCIE (0000:82:00.0) NSID 1 from core 0: 85147.08 332.61 375.26 11.63 6119.76 00:28:26.882 ======================================================== 00:28:26.882 Total : 85147.08 332.61 375.26 11.63 6119.76 00:28:26.882 00:28:26.882 16:27:09 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:26.882 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.261 Initializing NVMe Controllers 00:28:28.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:28.261 Initialization complete. Launching workers. 
00:28:28.261 ======================================================== 00:28:28.261 Latency(us) 00:28:28.261 Device Information : IOPS MiB/s Average min max 00:28:28.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 10929.90 148.42 45693.78 00:28:28.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18843.27 7942.54 47907.11 00:28:28.261 ======================================================== 00:28:28.261 Total : 147.00 0.57 13890.68 148.42 47907.11 00:28:28.261 00:28:28.261 16:27:11 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.261 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.634 Initializing NVMe Controllers 00:28:29.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:29.634 Initialization complete. Launching workers. 00:28:29.634 ======================================================== 00:28:29.634 Latency(us) 00:28:29.634 Device Information : IOPS MiB/s Average min max 00:28:29.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8550.18 33.40 3743.33 581.63 8333.12 00:28:29.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3907.00 15.26 8218.53 5208.45 16212.74 00:28:29.634 ======================================================== 00:28:29.634 Total : 12457.19 48.66 5146.90 581.63 16212.74 00:28:29.634 00:28:29.634 16:27:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:29.634 16:27:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:29.634 16:27:12 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.892 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.426 Initializing NVMe Controllers 00:28:32.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.426 Controller IO queue size 128, less than required. 00:28:32.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.426 Controller IO queue size 128, less than required. 00:28:32.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:32.426 Initialization complete. Launching workers. 
00:28:32.426 ======================================================== 00:28:32.426 Latency(us) 00:28:32.426 Device Information : IOPS MiB/s Average min max 00:28:32.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1372.20 343.05 94905.76 56882.88 140899.73 00:28:32.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.87 148.72 226786.56 71785.11 344849.95 00:28:32.426 ======================================================== 00:28:32.426 Total : 1967.08 491.77 134788.39 56882.88 344849.95 00:28:32.426 00:28:32.426 16:27:15 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:32.426 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.684 No valid NVMe controllers or AIO or URING devices found 00:28:32.684 Initializing NVMe Controllers 00:28:32.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.684 Controller IO queue size 128, less than required. 00:28:32.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.684 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:32.684 Controller IO queue size 128, less than required. 00:28:32.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.684 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:32.684 WARNING: Some requested NVMe devices were skipped 00:28:32.684 16:27:15 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:32.684 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.213 Initializing NVMe Controllers 00:28:35.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.213 Controller IO queue size 128, less than required. 00:28:35.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.213 Controller IO queue size 128, less than required. 00:28:35.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:35.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:35.213 Initialization complete. Launching workers. 
00:28:35.213 00:28:35.213 ==================== 00:28:35.213 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:35.213 TCP transport: 00:28:35.213 polls: 9675 00:28:35.213 idle_polls: 5554 00:28:35.213 sock_completions: 4121 00:28:35.213 nvme_completions: 5071 00:28:35.213 submitted_requests: 7644 00:28:35.213 queued_requests: 1 00:28:35.213 00:28:35.213 ==================== 00:28:35.213 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:35.213 TCP transport: 00:28:35.213 polls: 12628 00:28:35.213 idle_polls: 8481 00:28:35.213 sock_completions: 4147 00:28:35.213 nvme_completions: 5337 00:28:35.213 submitted_requests: 8048 00:28:35.213 queued_requests: 1 00:28:35.213 ======================================================== 00:28:35.213 Latency(us) 00:28:35.213 Device Information : IOPS MiB/s Average min max 00:28:35.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1264.70 316.17 104230.21 64515.70 171296.80 00:28:35.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1331.05 332.76 97708.97 45440.49 136553.60 00:28:35.213 ======================================================== 00:28:35.213 Total : 2595.75 648.94 100886.24 45440.49 171296.80 00:28:35.213 00:28:35.213 16:27:17 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:35.213 16:27:17 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.213 16:27:18 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:35.213 16:27:18 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:82:00.0 ']' 00:28:35.213 16:27:18 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:38.495 16:27:21 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=079d417c-1c97-4078-a2b4-1c684016cd45 00:28:38.495 16:27:21 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 079d417c-1c97-4078-a2b4-1c684016cd45 00:28:38.495 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=079d417c-1c97-4078-a2b4-1c684016cd45 00:28:38.495 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:38.495 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:38.495 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:38.495 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:38.752 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:38.753 { 00:28:38.753 "uuid": "079d417c-1c97-4078-a2b4-1c684016cd45", 00:28:38.753 "name": "lvs_0", 00:28:38.753 "base_bdev": "Nvme0n1", 00:28:38.753 "total_data_clusters": 238234, 00:28:38.753 "free_clusters": 238234, 00:28:38.753 "block_size": 512, 00:28:38.753 "cluster_size": 4194304 00:28:38.753 } 00:28:38.753 ]' 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="079d417c-1c97-4078-a2b4-1c684016cd45") .free_clusters' 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="079d417c-1c97-4078-a2b4-1c684016cd45") .cluster_size' 00:28:38.753 16:27:21 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:38.753 952936 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:38.753 16:27:21 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 079d417c-1c97-4078-a2b4-1c684016cd45 lbd_0 20480 00:28:39.691 16:27:22 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b594e78b-9586-45a8-9577-a0716c18da5b 00:28:39.691 16:27:22 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore b594e78b-9586-45a8-9577-a0716c18da5b lvs_n_0 00:28:40.258 16:27:23 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=bdae3405-c49a-4746-b258-1e13fd02e4fd 00:28:40.258 16:27:23 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb bdae3405-c49a-4746-b258-1e13fd02e4fd 00:28:40.258 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=bdae3405-c49a-4746-b258-1e13fd02e4fd 00:28:40.258 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:40.258 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:40.258 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:40.258 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:40.524 { 00:28:40.524 "uuid": "079d417c-1c97-4078-a2b4-1c684016cd45", 00:28:40.524 "name": "lvs_0", 00:28:40.524 "base_bdev": "Nvme0n1", 00:28:40.524 "total_data_clusters": 238234, 00:28:40.524 "free_clusters": 233114, 00:28:40.524 "block_size": 512, 00:28:40.524 "cluster_size": 4194304 00:28:40.524 }, 00:28:40.524 { 00:28:40.524 "uuid": "bdae3405-c49a-4746-b258-1e13fd02e4fd", 00:28:40.524 "name": "lvs_n_0", 00:28:40.524 "base_bdev": "b594e78b-9586-45a8-9577-a0716c18da5b", 00:28:40.524 "total_data_clusters": 5114, 00:28:40.524 "free_clusters": 5114, 00:28:40.524 "block_size": 512, 00:28:40.524 "cluster_size": 4194304 00:28:40.524 } 00:28:40.524 ]' 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="bdae3405-c49a-4746-b258-1e13fd02e4fd") .free_clusters' 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="bdae3405-c49a-4746-b258-1e13fd02e4fd") .cluster_size' 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:40.524 20456 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:40.524 16:27:23 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bdae3405-c49a-4746-b258-1e13fd02e4fd lbd_nest_0 20456 00:28:40.812 16:27:23 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=6216955e-3c2e-4348-95c5-a6ca602eec8c 00:28:40.812 16:27:23 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.097 16:27:23 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:41.097 16:27:23 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6216955e-3c2e-4348-95c5-a6ca602eec8c 00:28:41.355 16:27:24 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.613 16:27:24 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:41.613 16:27:24 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:41.613 16:27:24 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:41.613 16:27:24 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:41.613 16:27:24 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.613 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.814 Initializing NVMe Controllers 00:28:53.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.814 Initialization complete. Launching workers. 00:28:53.814 ======================================================== 00:28:53.814 Latency(us) 00:28:53.814 Device Information : IOPS MiB/s Average min max 00:28:53.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.70 0.02 21463.48 176.10 45710.32 00:28:53.814 ======================================================== 00:28:53.814 Total : 46.70 0.02 21463.48 176.10 45710.32 00:28:53.814 00:28:53.814 16:27:34 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:53.814 16:27:34 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.814 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.810 Initializing NVMe Controllers 00:29:03.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:03.810 Initialization complete. Launching workers. 
00:29:03.810 ======================================================== 00:29:03.810 Latency(us) 00:29:03.810 Device Information : IOPS MiB/s Average min max 00:29:03.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.99 9.87 12659.64 4918.46 51807.56 00:29:03.810 ======================================================== 00:29:03.811 Total : 78.99 9.87 12659.64 4918.46 51807.56 00:29:03.811 00:29:03.811 16:27:45 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:03.811 16:27:45 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:03.811 16:27:45 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:03.811 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.778 Initializing NVMe Controllers 00:29:13.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.778 Initialization complete. Launching workers. 00:29:13.778 ======================================================== 00:29:13.778 Latency(us) 00:29:13.778 Device Information : IOPS MiB/s Average min max 00:29:13.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7632.28 3.73 4192.34 298.98 11579.10 00:29:13.778 ======================================================== 00:29:13.778 Total : 7632.28 3.73 4192.34 298.98 11579.10 00:29:13.778 00:29:13.778 16:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:13.778 16:27:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:13.778 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.749 Initializing NVMe Controllers 00:29:23.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:23.749 Initialization complete. Launching workers. 00:29:23.749 ======================================================== 00:29:23.749 Latency(us) 00:29:23.749 Device Information : IOPS MiB/s Average min max 00:29:23.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3043.03 380.38 10518.61 716.39 31278.12 00:29:23.749 ======================================================== 00:29:23.749 Total : 3043.03 380.38 10518.61 716.39 31278.12 00:29:23.749 00:29:23.749 16:28:05 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:23.749 16:28:05 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:23.749 16:28:05 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.749 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.718 Initializing NVMe Controllers 00:29:33.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.718 Controller IO queue size 128, less than required. 00:29:33.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
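The q=1 runs above, and the q=32 and q=128 runs that follow, come from a two-level sweep in perf.sh over queue depth and IO size; per the qd_depth/io_size arrays traced earlier, the loop amounts to ten seconds of 50/50 randrw per combination against the listener created above:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done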
00:29:33.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:33.718 Initialization complete. Launching workers. 00:29:33.718 ======================================================== 00:29:33.718 Latency(us) 00:29:33.718 Device Information : IOPS MiB/s Average min max 00:29:33.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12045.51 5.88 10630.58 1791.97 23881.79 00:29:33.718 ======================================================== 00:29:33.718 Total : 12045.51 5.88 10630.58 1791.97 23881.79 00:29:33.718 00:29:33.718 16:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:33.718 16:28:15 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.718 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.679 Initializing NVMe Controllers 00:29:43.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.679 Controller IO queue size 128, less than required. 00:29:43.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.679 Initialization complete. Launching workers. 00:29:43.679 ======================================================== 00:29:43.679 Latency(us) 00:29:43.679 Device Information : IOPS MiB/s Average min max 00:29:43.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1212.50 151.56 106068.32 15089.70 222325.65 00:29:43.679 ======================================================== 00:29:43.679 Total : 1212.50 151.56 106068.32 15089.70 222325.65 00:29:43.679 00:29:43.679 16:28:26 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.936 16:28:26 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6216955e-3c2e-4348-95c5-a6ca602eec8c 00:29:44.502 16:28:27 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:44.761 16:28:27 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b594e78b-9586-45a8-9577-a0716c18da5b 00:29:45.021 16:28:27 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:45.300 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:45.567 rmmod nvme_tcp 00:29:45.567 rmmod nvme_fabrics 00:29:45.567 rmmod nvme_keyring 00:29:45.567 16:28:28 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 418020 ']' 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 418020 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 418020 ']' 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 418020 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 418020 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 418020' 00:29:45.567 killing process with pid 418020 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 418020 00:29:45.567 16:28:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 418020 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.470 16:28:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.378 16:28:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.378 00:29:49.378 real 1m31.295s 00:29:49.378 user 5m36.267s 00:29:49.378 sys 0m18.012s 00:29:49.378 16:28:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:49.378 16:28:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:49.378 ************************************ 00:29:49.378 END TEST nvmf_perf 00:29:49.378 ************************************ 00:29:49.378 16:28:32 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:49.378 16:28:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:49.378 16:28:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:49.378 16:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.378 ************************************ 00:29:49.378 START TEST nvmf_fio_host 00:29:49.378 ************************************ 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:49.378 * Looking for test storage... 
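The perf cleanup traced above unwinds the storage stack strictly in reverse order of creation, since each lvolstore lives on the bdev beneath it and cannot be deleted while a child lvol remains; as a sketch (UUIDs are the ones created earlier in this run):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1        # detach consumers first
    scripts/rpc.py bdev_lvol_delete 6216955e-3c2e-4348-95c5-a6ca602eec8c   # nested lvol (lbd_nest_0)
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                     # nested store on lbd_0
    scripts/rpc.py bdev_lvol_delete b594e78b-9586-45a8-9577-a0716c18da5b   # base lvol (lbd_0)
    scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0                       # base store on Nvme0n1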
00:29:49.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.378 16:28:32 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:51.284 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:51.284 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:51.284 Found net devices under 0000:84:00.0: cvl_0_0 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:51.284 Found net devices under 0000:84:00.1: cvl_0_1 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
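The discovery pass above does two things: nvmf/common.sh first classifies PCI functions against small vendor:device tables (0x8086:0x159b lands in the e810 list, an Intel E810-family NIC driven by ice), then resolves each match to its kernel net device through sysfs. A stand-alone sketch of the same lookup, with the PCI addresses taken from this run and only standard lspci/sysfs usage added:

  lspci -nn -d 8086:159b                    # should list the two E810 ports found above
  for pci in 0000:84:00.0 0000:84:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] && echo "net device under $pci: ${dev##*/}"
    done
  done

This is exactly how the script arrives at cvl_0_0 and cvl_0_1: the glob over /sys/bus/pci/devices/$pci/net/ yields one interface per port, and the ${pci_net_devs[@]##*/} strip keeps just the interface names.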
00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.284 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:51.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:29:51.285 00:29:51.285 --- 10.0.0.2 ping statistics --- 00:29:51.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.285 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:29:51.285 00:29:51.285 --- 10.0.0.1 ping statistics --- 00:29:51.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.285 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=430625 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 430625 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 430625 ']' 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:51.285 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.544 [2024-07-15 16:28:34.301660] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:51.544 [2024-07-15 16:28:34.301745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.544 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.544 [2024-07-15 16:28:34.365564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.544 [2024-07-15 16:28:34.456299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
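The ping exchange just logged validates a two-port loopback topology built from the commands above: one physical port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed replay, with interface names and addresses exactly as in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # initiator -> target

nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (the -i 0 -e 0xFFFF -m 0xF invocation above), so the target process only ever sees the namespaced port.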
00:29:51.544 [2024-07-15 16:28:34.456361] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.544 [2024-07-15 16:28:34.456375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.544 [2024-07-15 16:28:34.456386] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.544 [2024-07-15 16:28:34.456396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.544 [2024-07-15 16:28:34.456476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.544 [2024-07-15 16:28:34.456541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.544 [2024-07-15 16:28:34.456571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.544 [2024-07-15 16:28:34.456573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.803 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:51.803 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:51.803 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.061 [2024-07-15 16:28:34.853363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.061 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:52.061 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.061 16:28:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.061 16:28:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:52.319 Malloc1 00:29:52.319 16:28:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.577 16:28:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:52.835 16:28:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.093 [2024-07-15 16:28:35.887345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.093 16:28:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:53.351 16:28:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.609 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:53.609 fio-3.35 00:29:53.609 Starting 1 thread 00:29:53.609 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.138 00:29:56.138 test: (groupid=0, jobs=1): err= 0: pid=430984: Mon Jul 15 16:28:38 2024 00:29:56.138 read: IOPS=9345, BW=36.5MiB/s (38.3MB/s)(73.2MiB/2006msec) 00:29:56.138 slat (usec): min=2, max=133, avg= 2.93, stdev= 2.00 00:29:56.138 clat (usec): min=3131, max=12555, avg=7493.24, stdev=565.48 00:29:56.138 lat (usec): min=3153, max=12557, avg=7496.18, stdev=565.39 00:29:56.138 clat percentiles (usec): 00:29:56.139 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 7046], 00:29:56.139 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:29:56.139 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8356], 00:29:56.139 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[11469], 00:29:56.139 | 99.99th=[12518] 00:29:56.139 bw ( KiB/s): min=36112, 
max=38080, per=99.94%, avg=37362.00, stdev=860.30, samples=4 00:29:56.139 iops : min= 9028, max= 9520, avg=9340.50, stdev=215.07, samples=4 00:29:56.139 write: IOPS=9350, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2006msec); 0 zone resets 00:29:56.139 slat (nsec): min=2318, max=98355, avg=3033.69, stdev=1810.02 00:29:56.139 clat (usec): min=1099, max=11588, avg=6116.74, stdev=499.30 00:29:56.139 lat (usec): min=1105, max=11591, avg=6119.77, stdev=499.25 00:29:56.139 clat percentiles (usec): 00:29:56.139 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5735], 00:29:56.139 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:29:56.139 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:29:56.139 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 9765], 99.95th=[10814], 00:29:56.139 | 99.99th=[11600] 00:29:56.139 bw ( KiB/s): min=37072, max=37760, per=99.98%, avg=37396.00, stdev=300.58, samples=4 00:29:56.139 iops : min= 9268, max= 9440, avg=9349.00, stdev=75.14, samples=4 00:29:56.139 lat (msec) : 2=0.03%, 4=0.11%, 10=99.76%, 20=0.11% 00:29:56.139 cpu : usr=67.73%, sys=30.02%, ctx=33, majf=0, minf=6 00:29:56.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:56.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:56.139 issued rwts: total=18748,18758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:56.139 00:29:56.139 Run status group 0 (all jobs): 00:29:56.139 READ: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.2MiB (76.8MB), run=2006-2006msec 00:29:56.139 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.8MB), run=2006-2006msec 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:56.139 16:28:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.139 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:56.139 fio-3.35 00:29:56.139 Starting 1 thread 00:29:56.139 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.668 00:29:58.668 test: (groupid=0, jobs=1): err= 0: pid=431319: Mon Jul 15 16:28:41 2024 00:29:58.668 read: IOPS=7787, BW=122MiB/s (128MB/s)(244MiB/2004msec) 00:29:58.668 slat (usec): min=3, max=189, avg= 4.74, stdev= 5.27 00:29:58.668 clat (usec): min=2267, max=19640, avg=9636.87, stdev=2400.18 00:29:58.668 lat (usec): min=2271, max=19643, avg=9641.62, stdev=2400.40 00:29:58.668 clat percentiles (usec): 00:29:58.668 | 1.00th=[ 4883], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7439], 00:29:58.668 | 30.00th=[ 8225], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10290], 00:29:58.668 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12649], 95.00th=[13960], 00:29:58.668 | 99.00th=[15795], 99.50th=[16319], 99.90th=[19268], 99.95th=[19530], 00:29:58.668 | 99.99th=[19530] 00:29:58.668 bw ( KiB/s): min=51136, max=75744, per=51.10%, avg=63664.00, stdev=10068.24, samples=4 00:29:58.668 iops : min= 3196, max= 4734, avg=3979.00, stdev=629.27, samples=4 00:29:58.668 write: IOPS=4610, BW=72.0MiB/s (75.5MB/s)(131MiB/1814msec); 0 zone resets 00:29:58.668 slat (usec): min=31, max=367, avg=40.86, stdev=13.98 00:29:58.668 clat (usec): min=4740, max=21719, avg=11928.50, stdev=2154.09 00:29:58.668 lat (usec): min=4775, max=21754, avg=11969.35, stdev=2155.49 00:29:58.668 clat percentiles (usec): 00:29:58.668 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:29:58.668 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:29:58.668 | 70.00th=[13042], 80.00th=[13829], 90.00th=[15008], 95.00th=[15664], 00:29:58.668 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19268], 99.95th=[20317], 00:29:58.668 | 99.99th=[21627] 00:29:58.668 bw ( KiB/s): min=52864, max=78080, per=89.97%, avg=66368.00, stdev=10352.71, samples=4 00:29:58.668 iops : min= 3304, max= 4880, avg=4148.00, stdev=647.04, samples=4 00:29:58.668 lat (msec) : 4=0.16%, 10=42.11%, 20=57.71%, 50=0.02% 00:29:58.668 cpu : usr=68.15%, sys=21.07%, ctx=173, majf=0, 
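Both fio invocations above drive the target through SPDK's userspace initiator rather than the kernel nvme-tcp module: fio_plugin() probes build/fio/spdk_nvme with ldd (the grep libasan / grep libclang_rt.asan lines) so that a sanitizer runtime can be pushed onto LD_PRELOAD first when one is linked in, then preloads the plugin and passes the transport address through fio's --filename. Reduced to its essentials it looks like the sketch below; the job-file contents beyond ioengine=spdk (visible in fio's banner) are not shown in the log and are assumptions here:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio example_config.fio --bs=4096 \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'

Note the second run (mock_sgl_config.fio) passes no --bs override, which is why its banner reports the 16.0KiB block size from the job file instead of 4096B.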
minf=2 00:29:58.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:58.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:58.668 issued rwts: total=15606,8363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:58.668 00:29:58.668 Run status group 0 (all jobs): 00:29:58.668 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=244MiB (256MB), run=2004-2004msec 00:29:58.668 WRITE: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=131MiB (137MB), run=1814-1814msec 00:29:58.668 16:28:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.668 16:28:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:58.668 16:28:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:58.668 16:28:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:58.668 16:28:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:58.926 16:28:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:58.926 16:28:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:58.926 16:28:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:58.926 16:28:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:58.926 16:28:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:58.926 16:28:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:82:00.0 00:29:58.926 16:28:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 -i 10.0.0.2 00:30:02.221 Nvme0n1 00:30:02.221 16:28:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:04.740 16:28:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f5f21b54-2065-4b1b-a7ba-d49257ded13e 00:30:04.740 16:28:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f5f21b54-2065-4b1b-a7ba-d49257ded13e 00:30:04.740 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=f5f21b54-2065-4b1b-a7ba-d49257ded13e 00:30:04.740 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:04.740 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:04.740 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:04.740 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:04.997 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:04.997 { 00:30:04.997 "uuid": "f5f21b54-2065-4b1b-a7ba-d49257ded13e", 00:30:04.997 "name": "lvs_0", 00:30:04.997 "base_bdev": "Nvme0n1", 00:30:04.997 "total_data_clusters": 930, 00:30:04.997 "free_clusters": 930, 00:30:04.997 
"block_size": 512, 00:30:04.997 "cluster_size": 1073741824 00:30:04.997 } 00:30:04.997 ]' 00:30:04.997 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="f5f21b54-2065-4b1b-a7ba-d49257ded13e") .free_clusters' 00:30:04.997 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:04.997 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="f5f21b54-2065-4b1b-a7ba-d49257ded13e") .cluster_size' 00:30:05.254 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:05.254 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:05.254 16:28:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:05.254 952320 00:30:05.254 16:28:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:05.511 566d15c3-33f4-45db-a040-2937ea591402 00:30:05.511 16:28:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:05.769 16:28:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:06.027 16:28:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:06.295 16:28:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.558 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:06.558 fio-3.35 00:30:06.558 Starting 1 thread 00:30:06.558 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.083 00:30:09.083 test: (groupid=0, jobs=1): err= 0: pid=432607: Mon Jul 15 16:28:51 2024 00:30:09.083 read: IOPS=6210, BW=24.3MiB/s (25.4MB/s)(48.7MiB/2008msec) 00:30:09.083 slat (usec): min=2, max=397, avg= 3.22, stdev= 4.27 00:30:09.083 clat (usec): min=700, max=171081, avg=11277.67, stdev=11469.21 00:30:09.083 lat (usec): min=712, max=171121, avg=11280.89, stdev=11469.52 00:30:09.083 clat percentiles (msec): 00:30:09.083 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:30:09.083 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:09.083 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:30:09.083 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:09.083 | 99.99th=[ 171] 00:30:09.083 bw ( KiB/s): min=17280, max=27504, per=99.94%, avg=24828.00, stdev=5033.38, samples=4 00:30:09.083 iops : min= 4320, max= 6876, avg=6207.00, stdev=1258.34, samples=4 00:30:09.083 write: IOPS=6205, BW=24.2MiB/s (25.4MB/s)(48.7MiB/2008msec); 0 zone resets 00:30:09.083 slat (usec): min=2, max=181, avg= 3.34, stdev= 2.58 00:30:09.083 clat (usec): min=322, max=169137, avg=9221.62, stdev=10749.49 00:30:09.083 lat (usec): min=326, max=169145, avg=9224.96, stdev=10749.89 00:30:09.083 clat percentiles (msec): 00:30:09.083 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:09.084 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:09.084 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:09.084 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:30:09.084 | 99.99th=[ 169] 00:30:09.084 bw ( KiB/s): min=18344, max=27024, per=99.84%, avg=24782.00, stdev=4292.85, samples=4 00:30:09.084 iops : min= 4586, max= 6756, avg=6195.50, stdev=1073.21, samples=4 00:30:09.084 lat (usec) : 500=0.01%, 750=0.01% 00:30:09.084 lat (msec) : 2=0.04%, 4=0.10%, 10=63.49%, 20=35.85%, 250=0.51% 00:30:09.084 cpu : usr=61.34%, sys=36.62%, ctx=78, majf=0, minf=6 00:30:09.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:09.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.084 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:09.084 issued rwts: total=12471,12461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:09.084 00:30:09.084 Run status group 0 (all jobs): 00:30:09.084 READ: bw=24.3MiB/s (25.4MB/s), 24.3MiB/s-24.3MiB/s (25.4MB/s-25.4MB/s), io=48.7MiB (51.1MB), run=2008-2008msec 00:30:09.084 WRITE: bw=24.2MiB/s (25.4MB/s), 24.2MiB/s-24.2MiB/s (25.4MB/s-25.4MB/s), io=48.7MiB (51.0MB), run=2008-2008msec 00:30:09.084 16:28:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:09.084 16:28:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=1499b10d-e302-486b-9c1d-314cf2f3a906 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 1499b10d-e302-486b-9c1d-314cf2f3a906 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=1499b10d-e302-486b-9c1d-314cf2f3a906 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:10.460 { 00:30:10.460 "uuid": "f5f21b54-2065-4b1b-a7ba-d49257ded13e", 00:30:10.460 "name": "lvs_0", 00:30:10.460 "base_bdev": "Nvme0n1", 00:30:10.460 "total_data_clusters": 930, 00:30:10.460 "free_clusters": 0, 00:30:10.460 "block_size": 512, 00:30:10.460 "cluster_size": 1073741824 00:30:10.460 }, 00:30:10.460 { 00:30:10.460 "uuid": "1499b10d-e302-486b-9c1d-314cf2f3a906", 00:30:10.460 "name": "lvs_n_0", 00:30:10.460 "base_bdev": "566d15c3-33f4-45db-a040-2937ea591402", 00:30:10.460 "total_data_clusters": 237847, 00:30:10.460 "free_clusters": 237847, 00:30:10.460 "block_size": 512, 00:30:10.460 "cluster_size": 4194304 00:30:10.460 } 00:30:10.460 ]' 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="1499b10d-e302-486b-9c1d-314cf2f3a906") .free_clusters' 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="1499b10d-e302-486b-9c1d-314cf2f3a906") .cluster_size' 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:10.460 951388 00:30:10.460 16:28:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:11.392 1d733fc4-600a-4f9c-af3e-96b9134cd76f 00:30:11.392 16:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:11.392 16:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:11.650 16:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:11.907 16:28:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.165 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:12.165 fio-3.35 00:30:12.165 Starting 1 thread 00:30:12.165 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.689 00:30:14.690 test: (groupid=0, jobs=1): err= 0: pid=433449: Mon Jul 15 16:28:57 2024 00:30:14.690 read: IOPS=6001, BW=23.4MiB/s (24.6MB/s)(47.1MiB/2009msec) 00:30:14.690 slat (usec): min=2, max=285, avg= 3.66, stdev= 3.99 00:30:14.690 clat (usec): min=4140, max=19034, avg=11697.09, stdev=976.52 00:30:14.690 lat (usec): min=4177, max=19053, avg=11700.75, stdev=976.39 00:30:14.690 clat percentiles (usec): 00:30:14.690 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:30:14.690 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:30:14.690 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:30:14.690 | 99.00th=[13829], 99.50th=[14222], 99.90th=[16319], 99.95th=[17433], 00:30:14.690 | 99.99th=[19006] 00:30:14.690 bw ( KiB/s): min=22864, max=24592, per=99.89%, avg=23982.00, stdev=764.00, samples=4 00:30:14.690 iops : min= 5716, max= 6148, avg=5995.50, stdev=191.00, samples=4 00:30:14.690 write: IOPS=5983, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2009msec); 0 zone resets 00:30:14.690 slat (usec): min=2, max=158, avg= 3.80, stdev= 2.87 00:30:14.690 clat (usec): min=1989, max=17748, avg=9512.43, stdev=860.74 00:30:14.690 lat (usec): min=1997, max=17751, avg=9516.23, stdev=860.74 00:30:14.690 clat percentiles (usec): 00:30:14.690 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8848], 00:30:14.690 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:30:14.690 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:30:14.690 | 99.00th=[11338], 99.50th=[11600], 99.90th=[16319], 99.95th=[17433], 00:30:14.690 | 99.99th=[17695] 00:30:14.690 bw ( KiB/s): min=23752, max=24064, per=99.95%, avg=23922.00, stdev=138.62, samples=4 00:30:14.690 iops : min= 5938, max= 6016, avg=5980.50, stdev=34.66, samples=4 00:30:14.690 lat (msec) : 2=0.01%, 4=0.03%, 10=38.37%, 20=61.60% 00:30:14.690 cpu : usr=64.59%, sys=32.47%, ctx=67, majf=0, minf=6 00:30:14.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:14.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:14.690 issued rwts: total=12058,12021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:14.690 00:30:14.690 Run status group 0 (all jobs): 00:30:14.690 READ: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2009-2009msec 00:30:14.690 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.2MB), run=2009-2009msec 00:30:14.690 16:28:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:14.987 16:28:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:14.987 16:28:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:19.197 16:29:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
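The 952320 and 951388 figures used for the two bdev_lvol_create calls fall straight out of the lvstore geometry reported by bdev_lvol_get_lvstores: lvs_0 offers 930 free clusters of 1073741824 bytes (1 GiB), i.e. 930 * 1024 = 952320 MiB, while the nested lvs_n_0 offers 237847 free clusters of 4194304 bytes (4 MiB), i.e. 237847 * 4 = 951388 MiB.

Cleanup then mirrors setup in reverse, and the ordering matters: the lvol carved from the nested store goes first, then the nested store itself, then the base-level lvol and store, and only then is the NVMe controller detached (the last three steps follow on the next lines; rpc.py abbreviates the full scripts/rpc.py path used throughout):

  rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  rpc.py bdev_lvol_delete lvs_0/lbd_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_0
  rpc.py bdev_nvme_detach_controller Nvme0

Since lbd_0 is the base_bdev of lvs_n_0 (as the get_lvstores output above shows), tearing down in any other order would pull the base device out from under the nested store.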
00:30:19.197 16:29:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:22.475 16:29:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:22.475 16:29:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.372 rmmod nvme_tcp 00:30:24.372 rmmod nvme_fabrics 00:30:24.372 rmmod nvme_keyring 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 430625 ']' 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 430625 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 430625 ']' 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 430625 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 430625 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 430625' 00:30:24.372 killing process with pid 430625 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 430625 00:30:24.372 16:29:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 430625 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.372 16:29:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.276 16:29:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:26.276 00:30:26.276 real 0m37.199s 00:30:26.276 user 2m22.990s 00:30:26.276 sys 0m6.839s 00:30:26.276 16:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:26.276 16:29:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.276 ************************************ 00:30:26.276 END TEST nvmf_fio_host 00:30:26.276 ************************************ 00:30:26.534 16:29:09 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:26.534 16:29:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:26.534 16:29:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:26.534 16:29:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:26.534 ************************************ 00:30:26.534 START TEST nvmf_failover 00:30:26.534 ************************************ 00:30:26.534 16:29:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:26.534 * Looking for test storage... 00:30:26.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.534 16:29:09 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.534 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:26.534 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.534 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.534 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.534 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same golangci/protoc/go toolchain entries repeated several more times; duplicates elided...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...repeated golangci/protoc/go toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...repeated golangci/protoc/go toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...repeated golangci/protoc/go toolchain entries elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:26.535 16:29:09
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:26.535 16:29:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.436 16:29:11 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:28.436 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:28.437 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:28.437 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:28.437 16:29:11 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:28.437 Found net devices under 0000:84:00.0: cvl_0_0 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:28.437 Found net devices under 0000:84:00.1: cvl_0_1 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.437 
16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:28.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:30:28.437 00:30:28.437 --- 10.0.0.2 ping statistics --- 00:30:28.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.437 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:30:28.437 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:30:28.695 00:30:28.696 --- 10.0.0.1 ping statistics --- 00:30:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.696 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=436721 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 436721 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 436721 ']' 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:28.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:28.696 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.696 [2024-07-15 16:29:11.490419] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:28.696 [2024-07-15 16:29:11.490508] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.696 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.696 [2024-07-15 16:29:11.556249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:28.696 [2024-07-15 16:29:11.640874] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.696 [2024-07-15 16:29:11.640928] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.696 [2024-07-15 16:29:11.640953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.696 [2024-07-15 16:29:11.640967] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.696 [2024-07-15 16:29:11.640979] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.696 [2024-07-15 16:29:11.641075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.696 [2024-07-15 16:29:11.641191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.696 [2024-07-15 16:29:11.641194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.954 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:28.954 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:28.954 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.954 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.954 16:29:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.954 16:29:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.954 16:29:11 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:29.211 [2024-07-15 16:29:12.023104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.211 16:29:12 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:29.469 Malloc0 00:30:29.469 16:29:12 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:29.727 16:29:12 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:29.984 16:29:12 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.242 [2024-07-15 16:29:13.115795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.242 16:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:30.499 [2024-07-15 16:29:13.372581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:30.499 16:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:30.756 [2024-07-15 16:29:13.661516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=437008 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 437008 /var/tmp/bdevperf.sock 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 437008 ']' 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:30.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
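[Editor's note: the target-side setup that host/failover.sh has just traced can be reproduced by hand with the same RPCs. A minimal sketch follows; `rpc.py` is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the trace, and the loop is editorial shorthand for the three separate add_listener calls, so treat both as assumptions rather than part of the test script:

  # TCP transport with the options the test passes (failover.sh@22)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE, failover.sh@23)
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem cnode1: -a allows any host NQN, -s sets the serial number (failover.sh@24)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Attach Malloc0 as a namespace of the subsystem (failover.sh@25)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Three listeners on one target IP give the initiator alternate paths to fail over across (failover.sh@26-28)
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

bdevperf (failover.sh@30) is then started with -z, so it idles until a perform_tests RPC arrives on /var/tmp/bdevperf.sock, after which it runs the queue-depth-128, 4096-byte verify workload for 15 seconds shown on its command line.]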
00:30:30.756 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:31.014 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:31.014 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:31.014 16:29:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:31.014 16:29:13 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.579 NVMe0n1 00:30:31.579 16:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.146 00 00:30:32.146 16:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=437144 16:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 16:29:14 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:33.080 16:29:15 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.338 [2024-07-15 16:29:16.130974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162a370 is same with the state(5) to be set
[...the same tcp.c:1598 *ERROR* line for tqpair=0x162a370 repeated with successive timestamps from 16:29:16.131101 through 16:29:16.131437; duplicates elided...]
00:30:33.339 [2024-07-15 16:29:16.131450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162a370 is same with the state(5) to be set 16:29:16 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:36.618 16:29:19 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:36.618 00:30:36.618 16:29:19 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:36.876 [2024-07-15 16:29:19.759580]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162b210 is same with the state(5) to be set
[...the same tcp.c:1598 *ERROR* line for tqpair=0x162b210 repeated with successive timestamps from 16:29:19.759665 through 16:29:19.759803; duplicates elided...]
16:29:19 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:40.155 16:29:23 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.155 [2024-07-15 16:29:23.055644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.155 16:29:23 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:41.529 16:29:24 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:41.529 [2024-07-15 16:29:24.324841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162bf50 is same with the state(5) to be set
[...the same tcp.c:1598 *ERROR* line for tqpair=0x162bf50 repeated with successive timestamps from 16:29:24.324916 through 16:29:24.324992; duplicates elided...]
[2024-07-15 16:29:24.325004]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162bf50 is same with the state(5) to be set 00:30:41.530 16:29:24 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 437144 00:30:48.097 0 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 437008 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 437008 ']' 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 437008 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 437008 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 437008' 00:30:48.097 killing process with pid 437008 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 437008 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 437008 00:30:48.097 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:48.097 [2024-07-15 16:29:13.726380] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:48.097 [2024-07-15 16:29:13.726476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid437008 ] 00:30:48.097 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.097 [2024-07-15 16:29:13.787022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.097 [2024-07-15 16:29:13.873086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.097 Running I/O for 15 seconds... 
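[Editor's note: the ABORTED - SQ DELETION records that follow in try.txt are the direct effect of the listener rotation traced above: removing a listener makes the target tear the connection down and delete the submission queue, so every command still in flight on qid:1 completes as aborted, and the bdev_nvme layer carries on over the surviving path (the run ends with perform_tests returning 0 at failover.sh@59). A minimal sketch of that rotation, reconstructed from the rpc.py calls traced above; `rpc.py` and RPC_SOCK are editorial shorthand and error handling is omitted:

  RPC_SOCK=/var/tmp/bdevperf.sock
  # Give bdevperf two paths to one controller; the first attach creates the NVMe0n1 bdev,
  # the second registers 10.0.0.2:4421 as a failover target (failover.sh@35-36)
  rpc.py -s "$RPC_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s "$RPC_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Start the 15-second verify run in the background, then pull the active listener (failover.sh@38-43)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$RPC_SOCK" perform_tests &
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # fail over to 4421
  sleep 3
  # Rotate through the third port and back to the first (failover.sh@47-57)
  rpc.py -s "$RPC_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # fail over to 4422
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422  # back to 4420
  wait $!  # 0 here means the verify workload survived all three failovers

]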
00:30:48.097 [2024-07-15 16:29:16.133142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.097 [2024-07-15 16:29:16.133189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[...a long run of near-identical entries elided: for each I/O still outstanding on qid:1 when listener 4420 was removed, try.txt records one READ (lba 82376-82616) or WRITE (lba 82624-82920) command print followed by the same ABORTED - SQ DELETION (00/08) completion, with timestamps running from 16:29:16.133218 through 16:29:16.135353...]
00:30:48.099 [2024-07-15 16:29:16.135367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.099 [2024-07-15 16:29:16.135380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 
16:29:16.135679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.099 [2024-07-15 16:29:16.135692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.135790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83024 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.135804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.135837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.135849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83032 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.135862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.135888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.135899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83040 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.135912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.135942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.135953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83048 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.135966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.135979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.135990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83056 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 
[2024-07-15 16:29:16.136089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83072 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83088 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83096 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83112 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83120 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83128 len:8 PRP1 0x0 PRP2 0x0 00:30:48.099 [2024-07-15 16:29:16.136469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.099 [2024-07-15 16:29:16.136482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.099 [2024-07-15 16:29:16.136492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.099 [2024-07-15 16:29:16.136503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83136 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83144 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83152 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83160 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83168 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83176 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83184 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83192 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83200 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.136950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.136961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.136973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83208 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.136992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:48.100 [2024-07-15 16:29:16.137006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83216 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83224 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83232 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83240 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83248 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83256 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137325] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83264 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83272 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83280 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83288 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83296 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83304 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83312 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83320 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.100 [2024-07-15 16:29:16.137742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83328 len:8 PRP1 0x0 PRP2 0x0 00:30:48.100 [2024-07-15 16:29:16.137773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.100 [2024-07-15 16:29:16.137794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.100 [2024-07-15 16:29:16.137806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.101 [2024-07-15 16:29:16.137818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83336 len:8 PRP1 0x0 PRP2 0x0 00:30:48.101 [2024-07-15 16:29:16.137836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.101 [2024-07-15 16:29:16.137849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.101 [2024-07-15 16:29:16.137861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.101 [2024-07-15 16:29:16.137872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83344 len:8 PRP1 0x0 PRP2 0x0 00:30:48.101 [2024-07-15 16:29:16.137886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.101 [2024-07-15 16:29:16.137899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.101 [2024-07-15 16:29:16.137910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.101 [2024-07-15 16:29:16.137922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83352 len:8 PRP1 0x0 PRP2 0x0 00:30:48.101 [2024-07-15 16:29:16.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.101 [2024-07-15 16:29:16.137952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.101 [2024-07-15 
16:29:16.137963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.101 [2024-07-15 16:29:16.137975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83360 len:8 PRP1 0x0 PRP2 0x0 00:30:48.101 [2024-07-15 16:29:16.137987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.101 [2024-07-15 16:29:16.138001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.101 [2024-07-15 16:29:16.138012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.101 [2024-07-15 16:29:16.138024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83368 len:8 PRP1 0x0 PRP2 0x0 00:30:48.101 [2024-07-15 16:29:16.138037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.101 [2024-07-15 16:29:16.138077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.101 [2024-07-15 16:29:16.138088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.101 [2024-07-15 16:29:16.138107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83376 len:8 PRP1 0x0 PRP2 0x0 00:30:48.101 [2024-07-15 16:29:16.138120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.101 [2024-07-15 16:29:16.138133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.101 [2024-07-15 16:29:16.138143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.101 [2024-07-15 16:29:16.151639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83384 len:8 PRP1 0x0 PRP2 0x0 00:30:48.101 [2024-07-15 16:29:16.151668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.101 [2024-07-15 16:29:16.151758] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb154a0 was disconnected and freed. reset controller. 
00:30:48.101 [2024-07-15 16:29:16.151801] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:48.101 [2024-07-15 16:29:16.151841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.101 [2024-07-15 16:29:16.151860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.101 [2024-07-15 16:29:16.151876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.101 [2024-07-15 16:29:16.151890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.101 [2024-07-15 16:29:16.151904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.101 [2024-07-15 16:29:16.151918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.101 [2024-07-15 16:29:16.151933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:48.101 [2024-07-15 16:29:16.151945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.101 [2024-07-15 16:29:16.151959] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.101 [2024-07-15 16:29:16.152044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf66d0 (9): Bad file descriptor
00:30:48.101 [2024-07-15 16:29:16.155309] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.101 [2024-07-15 16:29:16.192653] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:48.101 [2024-07-15 16:29:19.760350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.101 [2024-07-15 16:29:19.760394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[READ command / ABORTED - SQ DELETION completion pairs continue for lba:90112 through lba:90408 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0)]
00:30:48.102 [2024-07-15 16:29:19.761559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.102 [2024-07-15 16:29:19.761572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[WRITE command / ABORTED - SQ DELETION completion pairs continue for lba:90440 through lba:90672 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000)]
00:30:48.103 [2024-07-15 16:29:19.762497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.103 [2024-07-15 16:29:19.762510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.762983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.762997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763410] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.103 [2024-07-15 16:29:19.763439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.103 [2024-07-15 16:29:19.763452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91008 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:19.763918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.763958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.763975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91072 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.763988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91080 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 
[2024-07-15 16:29:19.764044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91088 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.764092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91096 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.764140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91104 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.764192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91112 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.764240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91120 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.764288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90416 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.764336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.104 [2024-07-15 16:29:19.764360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.104 [2024-07-15 16:29:19.764371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90424 len:8 PRP1 0x0 PRP2 0x0 00:30:48.104 [2024-07-15 16:29:19.764383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764443] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcbfe30 was disconnected and freed. reset controller. 00:30:48.104 [2024-07-15 16:29:19.764462] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:48.104 [2024-07-15 16:29:19.764497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.104 [2024-07-15 16:29:19.764515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.104 [2024-07-15 16:29:19.764542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.104 [2024-07-15 16:29:19.764577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.104 [2024-07-15 16:29:19.764607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.104 [2024-07-15 16:29:19.764621] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.104 [2024-07-15 16:29:19.764660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf66d0 (9): Bad file descriptor 00:30:48.104 [2024-07-15 16:29:19.767893] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.104 [2024-07-15 16:29:19.808363] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
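Every queued command in the dump above is completed with status (00/08): Status Code Type 0x0 (generic) and Status Code 0x08, Command Aborted due to SQ Deletion, the status the host driver uses when it manually completes I/O still queued on a submission queue it is tearing down for failover. Below is a minimal sketch of how a consumer could recognize that status in a completion callback, assuming only SPDK's public spdk_nvme_cpl structure and its two named status constants; the io_ctx bookkeeping and the requeue policy are hypothetical, not SPDK's own recovery path.

/*
 * Minimal sketch, not SPDK's error handling: a hypothetical I/O
 * completion callback that recognizes the (00/08) status printed above.
 * Only spdk_nvme_cpl, spdk_nvme_cpl_is_error() and the two status
 * constants come from SPDK; io_ctx and the requeue policy are invented.
 */
#include <inttypes.h>
#include <stdio.h>

#include "spdk/nvme.h"

struct io_ctx {                 /* hypothetical per-I/O bookkeeping */
	uint64_t lba;
	int retries;
};

static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;         /* completed normally */
	}

	/*
	 * "(00/08)" in the log is (sct/sc): Status Code Type 0x0
	 * (generic) with Status Code 0x08, Command Aborted due to
	 * SQ Deletion -- the expected status for I/O that was still
	 * queued on a submission queue deleted during failover.
	 */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		io->retries++;  /* transient: safe to resubmit after reset */
		printf("lba %" PRIu64 " aborted by SQ deletion, requeueing\n",
		       io->lba);
		return;
	}

	fprintf(stderr, "lba %" PRIu64 " failed: sct=0x%x sc=0x%x dnr=%u\n",
		io->lba, (unsigned)cpl->status.sct,
		(unsigned)cpl->status.sc, (unsigned)cpl->status.dnr);
}

This matches the sequence logged above: the aborted I/O is completed back to the bdev layer, the qpair is freed, the failover trid switches from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes successfully.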
00:30:48.104 [2024-07-15 16:29:24.327574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.104 [2024-07-15 16:29:24.327619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... after the next qpair deletion the pattern repeats: every queued 8-block WRITE from lba:35392 through lba:36136 (cid varies), plus two queued READs (lba:35240 and lba:35248, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), is printed and completed ABORTED - SQ DELETION (00/08) ...]
00:30:48.107 [2024-07-15 16:29:24.330528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.330545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36144 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.330558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the four outstanding ASYNC EVENT REQUEST admin commands (qid:0 cid:0..3) are again completed ABORTED - SQ DELETION (00/08) ...]
00:30:48.107 [2024-07-15 16:29:24.330731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf66d0 is same with the state(5) to be set
[... nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) manually completes the remaining queued WRITEs lba:36152 through lba:36184 (PRP1 0x0 PRP2 0x0) with the same status ...]
00:30:48.107 [2024-07-15 16:29:24.331239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:48.107 [2024-07-15 16:29:24.331250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36192 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36200 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36208 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36216 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36224 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36232 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331546] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36240 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36248 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35256 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.107 [2024-07-15 16:29:24.331685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.107 [2024-07-15 16:29:24.331696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.107 [2024-07-15 16:29:24.331708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35264 len:8 PRP1 0x0 PRP2 0x0 00:30:48.107 [2024-07-15 16:29:24.331720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.331734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.331752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.331765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35272 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.331778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.331792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.331804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.331815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35280 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.331828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.331841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.331852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:30:48.108 [2024-07-15 16:29:24.331863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35288 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.331876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.331889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.331900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.331911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35296 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.331924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.331937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.331948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.331959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35304 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.331972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.331985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.331996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35312 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35320 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35328 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332156] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35336 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35344 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35352 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35360 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35368 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36256 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:35376 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35384 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35392 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35400 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35408 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35416 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35424 len:8 PRP1 0x0 PRP2 0x0 
00:30:48.108 [2024-07-15 16:29:24.332749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35432 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35440 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35448 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.108 [2024-07-15 16:29:24.332919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.108 [2024-07-15 16:29:24.332931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35456 len:8 PRP1 0x0 PRP2 0x0 00:30:48.108 [2024-07-15 16:29:24.332945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.108 [2024-07-15 16:29:24.332964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.332976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.332987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35464 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35472 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35480 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35488 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35496 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35504 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35512 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35520 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35528 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35536 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35544 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35552 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35560 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35568 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:48.109 [2024-07-15 16:29:24.333656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35576 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35584 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35592 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35600 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35608 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35616 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.333959] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.333969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.333980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35624 len:8 PRP1 0x0 PRP2 0x0 00:30:48.109 [2024-07-15 16:29:24.333993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.109 [2024-07-15 16:29:24.334006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.109 [2024-07-15 16:29:24.334016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.109 [2024-07-15 16:29:24.334027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35632 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.334040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.334053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.334064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.334074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35640 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.334087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.334099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.334110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.334121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35648 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.334133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.334151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.334162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.334173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35656 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.334186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.334199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.334209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.334220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35664 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.334233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.334246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.334257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.334268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35672 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.334280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.334292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.340990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35680 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35688 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35696 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35704 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35712 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 
16:29:24.341252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35720 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35728 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35736 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35744 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35752 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35760 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341538] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35768 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35776 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.111 [2024-07-15 16:29:24.341643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35784 len:8 PRP1 0x0 PRP2 0x0 00:30:48.111 [2024-07-15 16:29:24.341655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.111 [2024-07-15 16:29:24.341668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.111 [2024-07-15 16:29:24.341678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.112 [2024-07-15 16:29:24.341689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35792 len:8 PRP1 0x0 PRP2 0x0 00:30:48.112 [2024-07-15 16:29:24.341701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.112 [2024-07-15 16:29:24.341714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.112 [2024-07-15 16:29:24.341724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.112 [2024-07-15 16:29:24.341735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35800 len:8 PRP1 0x0 PRP2 0x0 00:30:48.112 [2024-07-15 16:29:24.341773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.112 [2024-07-15 16:29:24.341787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.112 [2024-07-15 16:29:24.341798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.112 [2024-07-15 16:29:24.341809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35808 len:8 PRP1 0x0 PRP2 0x0 00:30:48.112 [2024-07-15 16:29:24.341822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.112 [2024-07-15 16:29:24.341834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.112 [2024-07-15 16:29:24.341845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:48.112 [2024-07-15 16:29:24.341856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35816 len:8 PRP1 0x0 PRP2 0x0
00:30:48.112 [2024-07-15 16:29:24.341868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.112 [2024-07-15 16:29:24.341883 through 16:29:24.343958] the same four-entry abort sequence repeats for every request still queued on qpair 1 at the moment its submission queue is deleted:
00:30:48.112     nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:48.112     nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:48.112     nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE (or READ) sqid:1 cid:0 nsid:1 lba:<lba> len:8 PRP1 0x0 PRP2 0x0
00:30:48.112     nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.113 (covering WRITE lba:35824 through lba:36144 in steps of 8, plus READ lba:35240 and lba:35248)
00:30:48.113 [2024-07-15 16:29:24.344024] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb19e20 was disconnected and freed. reset controller.
00:30:48.113 [2024-07-15 16:29:24.344043] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:48.113 [2024-07-15 16:29:24.344060] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:48.113 [2024-07-15 16:29:24.344115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf66d0 (9): Bad file descriptor
00:30:48.113 [2024-07-15 16:29:24.347342] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:48.114 [2024-07-15 16:29:24.383417] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:48.114
00:30:48.114                                  Latency(us)
00:30:48.114 Device Information : runtime(s)     IOPS   MiB/s  Fail/s   TO/s   Average      min       max
00:30:48.114 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:48.114 	 Verification LBA range: start 0x0 length 0x4000
00:30:48.114 	 NVMe0n1            :      15.01  8964.21   35.02  283.41   0.00  13815.07   794.93  28544.57
00:30:48.114 ===================================================================================================================
00:30:48.114 Total              :             8964.21   35.02  283.41   0.00  13815.07   794.93  28544.57
00:30:48.114 Received shutdown signal, test time was about 15.000000 seconds
00:30:48.114
00:30:48.114                                  Latency(us)
00:30:48.114 Device Information : runtime(s)     IOPS   MiB/s  Fail/s   TO/s   Average      min       max
00:30:48.114 ===================================================================================================================
00:30:48.114 Total              :                0.00    0.00    0.00   0.00      0.00     0.00      0.00
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
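The three "Resetting controller successful" events above are exactly what the check at @65/@67 counts: one reset per forced failover. A minimal standalone sketch of that verification step, assuming the bdevperf output was captured to try.txt as in this run (the expected/count variable names are just for illustration):

    expected=3   # one controller reset per forced failover in this test
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != expected )); then
        echo "failover count mismatch: got $count, want $expected" >&2
        exit 1
    fi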
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=438975
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 438975 /var/tmp/bdevperf.sock
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 438975 ']'
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:48.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:48.114 [2024-07-15 16:29:30.800300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:48.114 16:29:30 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:48.406 [2024-07-15 16:29:31.093199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:48.406 16:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:48.668 NVMe0n1
00:30:48.668 16:29:31 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:49.239
00:30:49.239 16:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:49.497
00:30:49.497 16:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:49.497 16:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:30:49.754 16:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:50.013 16:29:32 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:30:53.293 16:29:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:53.293 16:29:35 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:30:53.293 16:29:36 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=439648
00:30:53.293 16:29:36 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:53.293 16:29:36 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 439648
00:30:54.668 0
00:30:54.668 16:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:54.668 [2024-07-15 16:29:30.309800] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:30:54.668 [2024-07-15 16:29:30.309892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438975 ]
00:30:54.668 EAL: No free 2048 kB hugepages reported on node 1
00:30:54.668 [2024-07-15 16:29:30.372816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:54.668 [2024-07-15 16:29:30.461460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:54.668 [2024-07-15 16:29:32.923164] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:54.668 [2024-07-15 16:29:32.923265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:54.668 [2024-07-15 16:29:32.923288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:54.668 [2024-07-15 16:29:32.923304 through 16:29:32.923380] the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for the queued admin commands cid:1, cid:2 and cid:3
00:30:54.668 [2024-07-15 16:29:32.923393] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:54.668 [2024-07-15 16:29:32.923448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:54.668 [2024-07-15 16:29:32.923478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16786d0 (9): Bad file descriptor
00:30:54.668 [2024-07-15 16:29:32.932022] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:54.668 Running I/O for 1 seconds...
00:30:54.668
00:30:54.668                                  Latency(us)
00:30:54.668 Device Information : runtime(s)     IOPS   MiB/s  Fail/s   TO/s   Average      min       max
00:30:54.668 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:54.668 	 Verification LBA range: start 0x0 length 0x4000
00:30:54.668 	 NVMe0n1            :       1.05  8729.90   34.10    0.00   0.00  14041.21  2949.12  43690.67
00:30:54.668 ===================================================================================================================
00:30:54.668 Total              :             8729.90   34.10    0.00   0.00  14041.21  2949.12  43690.67
00:30:54.668 16:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:54.668 16:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:30:54.926 16:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:54.926 16:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:55.183 16:29:37 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:55.183 16:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:55.441 16:29:38 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 438975
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 438975 ']'
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 438975
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 438975
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 438975'
00:30:58.731 killing process with pid 438975
00:30:58.731 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 438975
00:30:58.989 16:29:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 438975
00:30:58.989 16:29:41 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:30:58.989 16:29:41 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
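The @98/@100 detach calls above are what force the failovers the test counts: each one removes the path the controller is currently using, and bdev_nvme moves I/O to a surviving trid. A standalone sketch of that pattern against the same bdevperf RPC socket, with the addresses, names and sleep taken from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the active path; bdev_nvme fails over to one of the remaining trids.
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
    sleep 3   # give the reset/reconnect a moment to complete

    # The controller (and its bdev) must still be present on a surviving path.
    "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0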
16:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:59.247 rmmod nvme_tcp
00:30:59.247 rmmod nvme_fabrics
00:30:59.247 rmmod nvme_keyring
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 436721 ']'
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 436721
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 436721 ']'
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 436721
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 436721
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 436721'
00:30:59.247 killing process with pid 436721
00:30:59.247 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 436721
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 436721
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:59.505 16:29:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:02.035 16:29:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:02.035
00:31:02.035 real	0m35.197s
00:31:02.035 user	2m4.330s
00:31:02.035 sys	0m6.141s
00:31:02.035 16:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable
00:31:02.035 16:29:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
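nvmftestfini above tears the fixture down in a fixed order: the target subsystem first, then the initiator-side kernel modules, then the target process and its network namespace. A condensed sketch of the same order, assuming the rpc.py path and PIDs from this run, and with the namespace removal written as a plain "ip netns delete" (a rough equivalent of _remove_spdk_ns, not its exact implementation):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
    modprobe -v -r nvme-tcp          # also unloads nvme_fabrics/nvme_keyring, as seen above
    modprobe -v -r nvme-fabrics
    kill 436721 && wait 436721       # 436721: the nvmf_tgt reactor from this run
    ip netns delete cvl_0_0_ns_spdk  # assumed namespace cleanup for this topology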
************************************ 00:31:02.035 END TEST nvmf_failover 00:31:02.035 ************************************ 00:31:02.035 16:29:44 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:02.035 16:29:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:02.035 16:29:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:02.035 16:29:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:02.035 ************************************ 00:31:02.035 START TEST nvmf_host_discovery 00:31:02.035 ************************************ 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:02.035 * Looking for test storage... 00:31:02.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.035 16:29:44 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.036 16:29:44 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated five more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[the previous PATH]
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the previous PATH]
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo [the exported PATH, as above]
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- #
DISCOVERY_PORT=8009 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:02.036 16:29:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:03.944 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:03.944 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:03.944 Found net devices under 0000:84:00.0: cvl_0_0 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:03.944 Found net devices under 0000:84:00.1: cvl_0_1 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:03.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:03.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:31:03.944 00:31:03.944 --- 10.0.0.2 ping statistics --- 00:31:03.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.944 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:31:03.944 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:03.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:03.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:31:03.945 00:31:03.945 --- 10.0.0.1 ping statistics --- 00:31:03.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.945 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=442263 00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=442263
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 442263
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 442263 ']'
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:03.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable
00:31:03.945 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.945 [2024-07-15 16:29:46.689417] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:31:03.945 [2024-07-15 16:29:46.689488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:03.945 EAL: No free 2048 kB hugepages reported on node 1
00:31:03.945 [2024-07-15 16:29:46.753097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:03.945 [2024-07-15 16:29:46.837273] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:03.945 [2024-07-15 16:29:46.837325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:03.945 [2024-07-15 16:29:46.837352] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:03.945 [2024-07-15 16:29:46.837411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.203 [2024-07-15 16:29:46.986141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.203 [2024-07-15 16:29:46.994340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.203 16:29:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.203 null0 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.203 null1 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=442284 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 442284 /tmp/host.sock 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 442284 ']' 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:04.203 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:04.203 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.203 [2024-07-15 16:29:47.066812] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:04.203 [2024-07-15 16:29:47.066881] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442284 ] 00:31:04.203 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.203 [2024-07-15 16:29:47.128837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.462 [2024-07-15 16:29:47.220998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
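Stripped of the xtrace noise, the control-plane sequence above is: create the TCP transport on the target, expose the well-known discovery subsystem on 10.0.0.2:8009, create two null bdevs (null0, null1) to generate namespace events later, start a second nvmf_tgt as the "host" application on /tmp/host.sock, and point its bdev_nvme discovery service at the target. A condensed sketch with SPDK's scripts/rpc.py, which the harness's rpc_cmd wraps (every method name, address, and the host NQN below are taken verbatim from the log):

    # target side (default RPC socket)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512     # size 1000, block size 512, as in the log
    rpc.py bdev_null_create null1 1000 512

    # host side (second app, RPC socket /tmp/host.sock)
    rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test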
00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.462 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:04.719 16:29:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.719 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.720 [2024-07-15 16:29:47.619996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:04.720 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:05.038 16:29:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:05.604 [2024-07-15 16:29:48.394906] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:05.604 [2024-07-15 16:29:48.394947] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:05.604 [2024-07-15 16:29:48.394971] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:05.604 [2024-07-15 16:29:48.481216] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:05.604 [2024-07-15 16:29:48.544615] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:05.604 [2024-07-15 16:29:48.544638] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:05.861 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
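The waitforcondition/sleep pattern that dominates the rest of this log is plain polling. A hedged reconstruction of the helper from its own xtrace (autotest_common.sh@910-916): the condition string is re-eval'ed up to 10 times with a one-second sleep between attempts:

    waitforcondition() {
        local cond=$1    # condition as a string, e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition met
            sleep 1                    # give the discovery service time to catch up
        done
        return 1                       # timed out after ~10 s (assumed failure path)
    }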
00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.120 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.121 16:29:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.379 16:29:49 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.380 16:29:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.314 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.572 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:07.572 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.573 [2024-07-15 16:29:50.315804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:07.573 [2024-07-15 16:29:50.316420] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:07.573 [2024-07-15 16:29:50.316462] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.573 [2024-07-15 16:29:50.402643] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:07.573 16:29:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:07.831 [2024-07-15 16:29:50.661919] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:07.831 [2024-07-15 16:29:50.661950] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:07.831 [2024-07-15 16:29:50.661959] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:08.771 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.772 [2024-07-15 16:29:51.555824] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:08.772 [2024-07-15 16:29:51.555862] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:08.772 [2024-07-15 16:29:51.561388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.772 [2024-07-15 16:29:51.561426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.772 [2024-07-15 16:29:51.561456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.772 [2024-07-15 16:29:51.561480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.772 [2024-07-15 16:29:51.561496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.772 [2024-07-15 16:29:51.561512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.772 [2024-07-15 16:29:51.561528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:08.772 [2024-07-15 16:29:51.561544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:08.772 [2024-07-15 16:29:51.561559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ece0 is same with the state(5) to 
be set 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:08.772 [2024-07-15 16:29:51.571391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.772 [2024-07-15 16:29:51.581437] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.772 [2024-07-15 16:29:51.581692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.772 [2024-07-15 16:29:51.581725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ece0 with addr=10.0.0.2, port=4420 00:31:08.772 [2024-07-15 16:29:51.581754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ece0 is same with the state(5) to be set 00:31:08.772 [2024-07-15 16:29:51.581804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.772 [2024-07-15 16:29:51.581826] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.772 [2024-07-15 16:29:51.581840] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.772 [2024-07-15 16:29:51.581857] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.772 [2024-07-15 16:29:51.581877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.772 [2024-07-15 16:29:51.591522] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.772 [2024-07-15 16:29:51.591688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.772 [2024-07-15 16:29:51.591719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ece0 with addr=10.0.0.2, port=4420 00:31:08.772 [2024-07-15 16:29:51.591744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ece0 is same with the state(5) to be set 00:31:08.772 [2024-07-15 16:29:51.591786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.772 [2024-07-15 16:29:51.591806] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.772 [2024-07-15 16:29:51.591825] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.772 [2024-07-15 16:29:51.591838] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
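The repeated ERROR blocks in this stretch are expected rather than fatal: nvmf_subsystem_remove_listener has just dropped 10.0.0.2:4420, so every reconnect attempt against the existing controller fails with errno 111 (ECONNREFUSED) until the refreshed discovery log page prunes the dead path. A hypothetical manual probe, not part of the harness, would show the same asymmetry between the two ports:

    bash -c ': </dev/tcp/10.0.0.2/4420' || echo '4420 refused (ECONNREFUSED)'   # listener removed
    bash -c ': </dev/tcp/10.0.0.2/4421' && echo '4421 still accepting'          # listener still up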
00:31:08.772 [2024-07-15 16:29:51.591857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.772 [2024-07-15 16:29:51.601601] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.772 [2024-07-15 16:29:51.601830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.772 [2024-07-15 16:29:51.601858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ece0 with addr=10.0.0.2, port=4420 00:31:08.772 [2024-07-15 16:29:51.601874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ece0 is same with the state(5) to be set 00:31:08.772 [2024-07-15 16:29:51.601895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.772 [2024-07-15 16:29:51.601915] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.772 [2024-07-15 16:29:51.601929] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.772 [2024-07-15 16:29:51.601941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.772 [2024-07-15 16:29:51.601959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.772 [2024-07-15 16:29:51.611683] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.772 [2024-07-15 16:29:51.611864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.772 [2024-07-15 16:29:51.611892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ece0 with addr=10.0.0.2, port=4420 00:31:08.772 [2024-07-15 16:29:51.611908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x131ece0 is same with the state(5) to be set 00:31:08.772 [2024-07-15 16:29:51.611930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.772 [2024-07-15 16:29:51.611950] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.772 [2024-07-15 16:29:51.611964] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.772 [2024-07-15 16:29:51.611977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.772 [2024-07-15 16:29:51.612001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.772 [2024-07-15 16:29:51.621763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.772 [2024-07-15 16:29:51.621900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.772 [2024-07-15 16:29:51.621927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ece0 with addr=10.0.0.2, port=4420 00:31:08.772 [2024-07-15 16:29:51.621943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ece0 is same with the state(5) to be set 00:31:08.772 [2024-07-15 16:29:51.621965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.772 [2024-07-15 16:29:51.621986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.772 [2024-07-15 16:29:51.622000] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.772 [2024-07-15 16:29:51.622013] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.772 [2024-07-15 16:29:51.622047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.772 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.772 [2024-07-15 16:29:51.631833] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.772 [2024-07-15 16:29:51.631981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.772 [2024-07-15 16:29:51.632007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ece0 with addr=10.0.0.2, port=4420 00:31:08.772 [2024-07-15 16:29:51.632022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ece0 is same with the state(5) to be set 00:31:08.772 [2024-07-15 16:29:51.632062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.772 [2024-07-15 16:29:51.632086] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.772 [2024-07-15 16:29:51.632102] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.772 [2024-07-15 16:29:51.632117] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
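Once the discovery log page is re-fetched, the 4420 path is pruned ("not found" below) while 4421 survives, and the test asserts this with get_subsystem_paths; per its xtrace at host/discovery.sh@63 the helper reduces to the first sketch below. The is_notification_count_eq checks that follow rest on the bookkeeping reconstructed (hedged) underneath it:

    # get_subsystem_paths nvme0: list the remaining transport service IDs
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # prints "4420 4421" while both listeners are up, "4421" after the removal

    # get_notification_count: count notifications newer than the last seen id
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }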
00:31:08.772 [2024-07-15 16:29:51.632139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.773 [2024-07-15 16:29:51.641900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:08.773 [2024-07-15 16:29:51.642083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.773 [2024-07-15 16:29:51.642114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ece0 with addr=10.0.0.2, port=4420 00:31:08.773 [2024-07-15 16:29:51.642133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ece0 is same with the state(5) to be set 00:31:08.773 [2024-07-15 16:29:51.642159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ece0 (9): Bad file descriptor 00:31:08.773 [2024-07-15 16:29:51.642182] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:08.773 [2024-07-15 16:29:51.642199] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:08.773 [2024-07-15 16:29:51.642215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:08.773 [2024-07-15 16:29:51.642237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:08.773 [2024-07-15 16:29:51.642293] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:08.773 [2024-07-15 16:29:51.642324] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:08.773 
16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.773 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:09.032 16:29:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.032 16:29:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.967 [2024-07-15 16:29:52.895483] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:09.967 [2024-07-15 16:29:52.895516] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:09.967 [2024-07-15 16:29:52.895543] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:10.226 [2024-07-15 16:29:52.982829] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:10.226 [2024-07-15 16:29:53.090207] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:10.226 [2024-07-15 16:29:53.090254] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.226 
16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.226 request: 00:31:10.226 { 00:31:10.226 "name": "nvme", 00:31:10.226 "trtype": "tcp", 00:31:10.226 "traddr": "10.0.0.2", 00:31:10.226 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:10.226 "adrfam": "ipv4", 00:31:10.226 "trsvcid": "8009", 00:31:10.226 "wait_for_attach": true, 00:31:10.226 "method": "bdev_nvme_start_discovery", 00:31:10.226 "req_id": 1 00:31:10.226 } 00:31:10.226 Got JSON-RPC error response 00:31:10.226 response: 00:31:10.226 { 00:31:10.226 "code": -17, 00:31:10.226 "message": "File exists" 00:31:10.226 } 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:10.226 16:29:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.226 request: 00:31:10.226 { 00:31:10.226 "name": "nvme_second", 00:31:10.226 "trtype": "tcp", 00:31:10.226 "traddr": "10.0.0.2", 00:31:10.226 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:10.226 "adrfam": "ipv4", 00:31:10.226 "trsvcid": "8009", 00:31:10.226 "wait_for_attach": true, 00:31:10.226 "method": "bdev_nvme_start_discovery", 00:31:10.226 "req_id": 1 00:31:10.226 } 00:31:10.226 Got JSON-RPC error response 00:31:10.226 response: 00:31:10.226 { 00:31:10.226 "code": -17, 00:31:10.226 "message": "File exists" 00:31:10.226 } 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.226 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.486 16:29:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.420 [2024-07-15 16:29:54.278432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.420 [2024-07-15 16:29:54.278480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fea20 with addr=10.0.0.2, port=8010 00:31:11.420 [2024-07-15 16:29:54.278505] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:11.420 [2024-07-15 16:29:54.278521] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:11.420 [2024-07-15 16:29:54.278535] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:12.354 [2024-07-15 16:29:55.280805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.354 [2024-07-15 16:29:55.280841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ad50 with addr=10.0.0.2, port=8010 00:31:12.354 [2024-07-15 16:29:55.280864] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:12.354 [2024-07-15 16:29:55.280877] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:12.354 [2024-07-15 16:29:55.280890] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:13.758 [2024-07-15 16:29:56.283036] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:13.758 request: 00:31:13.759 { 00:31:13.759 "name": "nvme_second", 00:31:13.759 "trtype": "tcp", 00:31:13.759 "traddr": "10.0.0.2", 00:31:13.759 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:13.759 "adrfam": "ipv4", 00:31:13.759 "trsvcid": "8010", 00:31:13.759 "attach_timeout_ms": 3000, 00:31:13.759 "method": "bdev_nvme_start_discovery", 00:31:13.759 "req_id": 1 00:31:13.759 } 00:31:13.759 Got JSON-RPC error response 00:31:13.759 response: 00:31:13.759 { 
00:31:13.759 "code": -110, 00:31:13.759 "message": "Connection timed out" 00:31:13.759 } 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 442284 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:13.759 rmmod nvme_tcp 00:31:13.759 rmmod nvme_fabrics 00:31:13.759 rmmod nvme_keyring 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 442263 ']' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 442263 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 442263 ']' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 442263 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 442263 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # 
process_name=reactor_1 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 442263' 00:31:13.759 killing process with pid 442263 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 442263 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 442263 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:13.759 16:29:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:16.315 00:31:16.315 real 0m14.155s 00:31:16.315 user 0m21.124s 00:31:16.315 sys 0m2.850s 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.315 ************************************ 00:31:16.315 END TEST nvmf_host_discovery 00:31:16.315 ************************************ 00:31:16.315 16:29:58 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:16.315 16:29:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:16.315 16:29:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:16.315 16:29:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.315 ************************************ 00:31:16.315 START TEST nvmf_host_multipath_status 00:31:16.315 ************************************ 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:16.315 * Looking for test storage... 
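The shutdown just traced is the standard nvmftestfini sequence: the host-side app on /tmp/host.sock is killed, the nvmf_tgt process is stopped by pid, the kernel initiator modules are unloaded, and the namespaced test address is flushed. Condensed to its effective commands (pids and interface names are specific to this run):

kill 442284                    # host-side discovery app (/tmp/host.sock)
kill 442263                    # nvmf_tgt, running as reactor_1
modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1       # drop the initiator-side 10.0.0.1 address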
00:31:16.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.315 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:16.316 16:29:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:16.316 16:29:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.690 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:17.691 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:17.691 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
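The device scan above walks a whitelist of Intel E810 and Mellanox PCI device IDs and then resolves each surviving PCI function to its kernel net device through sysfs. The mapping step, as the harness performs it (device addresses taken from this trace):

# List the net devices bound to each matching PCI function, then strip the
# sysfs path down to the bare interface name (the ##*/ expansion traced above).
for pci in 0000:84:00.0 0000:84:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done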
00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:17.691 Found net devices under 0000:84:00.0: cvl_0_0 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:17.691 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:17.950 Found net devices under 0000:84:00.1: cvl_0_1 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.950 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:17.951 16:30:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:17.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:31:17.951 00:31:17.951 --- 10.0.0.2 ping statistics --- 00:31:17.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.951 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:31:17.951 00:31:17.951 --- 10.0.0.1 ping statistics --- 00:31:17.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.951 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=445520 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 445520 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 445520 ']' 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:17.951 16:30:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.951 [2024-07-15 16:30:00.858001] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
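The two-way ping that just passed rests on the namespace plumbing traced a few entries earlier: the target-side port is moved into a private network namespace so initiator and target can share one machine. Reduced to its effective commands (interface and namespace names from this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator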
00:31:17.951 [2024-07-15 16:30:00.858099] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.951 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.208 [2024-07-15 16:30:00.936342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:18.208 [2024-07-15 16:30:01.028783] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.208 [2024-07-15 16:30:01.028858] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.208 [2024-07-15 16:30:01.028886] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.208 [2024-07-15 16:30:01.028898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.208 [2024-07-15 16:30:01.028909] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.208 [2024-07-15 16:30:01.028959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.208 [2024-07-15 16:30:01.028965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=445520 00:31:18.208 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:18.465 [2024-07-15 16:30:01.401272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.465 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:18.722 Malloc0 00:31:18.978 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:19.235 16:30:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:19.493 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.493 [2024-07-15 16:30:02.455201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:19.751 [2024-07-15 16:30:02.703915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=445863 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 445863 /var/tmp/bdevperf.sock 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 445863 ']' 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:19.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:19.751 16:30:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.317 16:30:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:20.317 16:30:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:20.317 16:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:20.574 16:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:20.832 Nvme0n1 00:31:20.832 16:30:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:21.396 Nvme0n1 00:31:21.396 16:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:21.396 16:30:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:23.926 16:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:23.926 16:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:23.926 16:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:24.184 16:30:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:25.118 16:30:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:25.118 16:30:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:25.118 16:30:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.118 16:30:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:25.375 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.375 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:25.375 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.375 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.633 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.633 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.633 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.633 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.890 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.890 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:25.890 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.890 16:30:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:26.147 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.147 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:26.147 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.147 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:26.404 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.404 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:26.404 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.404 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.661 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.661 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:26.661 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:27.225 16:30:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:27.225 16:30:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:28.209 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:28.209 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:28.209 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.209 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.774 16:30:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.340 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:29.599 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.599 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:29.599 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.599 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:30.165 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.165 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:30.165 16:30:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:30.422 16:30:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:30.680 16:30:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:31.614 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:31.614 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:31.614 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.614 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.872 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.872 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:31.873 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.873 16:30:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.130 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.130 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:32.130 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.130 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.388 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.388 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.388 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.388 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.647 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.647 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.647 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.647 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:32.905 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.905 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:32.905 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.905 16:30:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.471 16:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.471 16:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:33.471 16:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:33.471 16:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:34.037 16:30:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:34.972 16:30:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:34.972 16:30:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:34.972 16:30:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.972 16:30:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:35.230 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.230 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:35.230 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.230 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.488 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.488 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.488 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.488 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.747 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.747 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.747 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.747 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:36.005 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.005 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:36.005 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.005 16:30:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:36.263 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
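The set_ANA_state and port_status helpers being traced throughout this run read more easily in consolidated form. The sketch below is reconstructed from the commands visible in this log, not copied from multipath_status.sh itself; the RPC_PY and NQN variables are illustrative shorthand (assumptions) for the full paths and subsystem name that appear in the trace.

  # Sketch of the two helpers exercised in this trace, assuming the rpc.py
  # path and bdevperf RPC socket seen in the log above.
  RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # set_ANA_state <state for port 4420> <state for port 4421>
  # States used in this run: optimized, non_optimized, inaccessible.
  set_ANA_state() {
      $RPC_PY nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $RPC_PY nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # port_status <port> <field> <expected>
  # Ask the initiator (bdevperf) for its view of the I/O paths over the
  # bdevperf RPC socket and compare one boolean field (current, connected
  # or accessible) for the listener on the given port.
  port_status() {
      local status
      status=$($RPC_PY -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
      [[ $status == "$3" ]]
  }

Each set_ANA_state call in the trace is followed by a one-second sleep before the status probes run, apparently to give the initiator time to observe the ANA state change before the assertions fire.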
00:31:36.263 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:36.263 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.263 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.523 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.523 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:36.523 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:36.842 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:37.100 16:30:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:38.035 16:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:38.035 16:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:38.035 16:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.035 16:30:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:38.292 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.292 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:38.292 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.292 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:38.550 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.550 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:38.550 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.550 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.808 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.808 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
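check_status, in turn, appears from the trace to be nothing more than six port_status probes, one per field and listener, always in the same order; a minimal sketch under that assumption:

  # check_status <4420 current> <4421 current> <4420 connected> <4421 connected>
  #              <4420 accessible> <4421 accessible>
  # Probe order matches the trace; under the harness's errexit behavior the
  # first mismatching probe would abort the test (an assumption, since the
  # function body itself is not shown in this log).
  check_status() {
      port_status 4420 current "$1"
      port_status 4421 current "$2"
      port_status 4420 connected "$3"
      port_status 4421 connected "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }

So "check_status true false true true true false", for example, asserts that only the 4420 path is current, that both paths remain connected, and that the 4421 path has become inaccessible for I/O.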
00:31:38.808 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.808 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:39.066 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.066 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:39.066 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.066 16:30:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:39.325 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.325 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:39.325 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.325 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.583 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.583 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:39.583 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:39.841 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:40.098 16:30:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:41.034 16:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:41.034 16:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:41.034 16:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.034 16:30:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:41.292 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.292 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:41.292 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.292 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:41.549 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.549 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:41.549 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.549 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.807 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.807 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.808 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.808 16:30:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:42.066 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.066 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:42.066 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.066 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:42.324 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.324 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:42.324 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.324 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:42.582 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.582 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:43.147 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:43.147 16:30:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:43.147 16:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:43.714 16:30:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:44.648 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:44.648 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:44.648 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.648 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:44.906 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.906 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:44.906 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.906 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:45.164 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.164 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:45.164 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.164 16:30:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:45.422 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.422 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:45.422 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.422 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:45.706 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.706 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:45.706 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.706 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:45.965 16:30:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.965 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:45.965 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.965 16:30:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.223 16:30:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.223 16:30:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:46.223 16:30:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:46.482 16:30:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:46.740 16:30:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.113 16:30:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:48.370 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.370 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:48.370 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.370 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.628 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.628 16:30:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.628 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.628 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:48.886 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.886 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:48.886 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.886 16:30:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:49.145 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.145 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:49.145 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.145 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:49.403 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.403 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:49.403 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:49.662 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:50.231 16:30:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:51.174 16:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:51.174 16:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:51.174 16:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.174 16:30:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.431 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.431 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:51.431 16:30:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.431 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:51.688 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.688 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:51.688 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.688 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:51.944 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.944 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:51.944 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.944 16:30:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.201 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.201 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.201 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.201 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.459 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.459 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:52.459 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.459 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:52.717 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.717 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:52.717 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:52.974 16:30:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:53.232 16:30:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.609 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:54.868 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.868 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:54.868 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.868 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.126 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.126 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.126 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.126 16:30:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.384 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.384 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:55.384 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.384 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.642 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.642 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:55.642 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:55.642 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 445863
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 445863 ']'
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 445863
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 445863
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:31:55.900 16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 445863'
killing process with pid 445863
16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 445863
16:30:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 445863
00:31:56.170 Connection closed with partial response:
00:31:56.170
00:31:56.170
00:31:56.170 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 445863
00:31:56.170 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-15 16:30:02.765194] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-15 16:30:02.765272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445863 ]
00:31:56.170 EAL: No free 2048 kB hugepages reported on node 1
00:31:56.170 [2024-07-15 16:30:02.837913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:56.170 [2024-07-15 16:30:02.934216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:31:56.170 Running I/O for 90 seconds... 
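The try.txt dump that follows is the bdevperf (initiator) side of the run. The nvme_qpair.c NOTICE lines come in pairs: print_command logs a queued WRITE or READ with its sqid, cid and lba, and print_completion logs its completion status. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is the path-related NVMe status (status code type 0x3, status code 0x2) returned while the path the I/O was issued on sits in the inaccessible ANA state; with dnr:0 these failures are retryable, which is what pushes the multipath bdev onto the surviving path and keeps the 90-second verify workload running. A purely illustrative way to gauge how much of the workload hit that state, not part of the test itself:

  # Hypothetical post-processing of the captured bdevperf log, using the
  # try.txt path printed by the 'cat' above.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt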
00:31:56.170 [2024-07-15 16:30:19.658856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.658915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.658994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.659645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.659661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.660471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.660492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.660518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.660535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.660558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.660573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.660595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.660611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.660633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.660648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.660670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.170 [2024-07-15 16:30:19.660685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.170 [2024-07-15 16:30:19.660707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.171 [2024-07-15 16:30:19.660722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.660757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.660775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.660805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.660821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.660843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.660858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.660880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.660895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.660917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.660932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.660954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.660969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.660991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.171 [2024-07-15 16:30:19.661006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.661974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.661990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:31:56.171 [2024-07-15 16:30:19.662324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.171 [2024-07-15 16:30:19.662434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.171 [2024-07-15 16:30:19.662476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.171 [2024-07-15 16:30:19.662575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.171 [2024-07-15 16:30:19.662596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.662642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.662686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.662729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.662798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.662842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.662885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.662930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.662957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.662978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.663643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.172 [2024-07-15 16:30:19.663686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.172 [2024-07-15 16:30:19.663728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.663956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.663972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.172 [2024-07-15 16:30:19.664493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.172 [2024-07-15 16:30:19.664509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.664960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.664977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:31:56.173 [2024-07-15 16:30:19.665106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:19.665495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:19.665511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:36.149309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:36.149367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:36.149409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:36.149448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.173 [2024-07-15 16:30:36.149958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.149980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:36.149997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.150025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:36.150057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.150080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.173 [2024-07-15 16:30:36.150097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.173 [2024-07-15 16:30:36.150119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.173 [2024-07-15 16:30:36.150136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.150733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.150766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.150796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.150821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.150844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.150861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.150884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.150901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.150923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.150939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.150962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.150979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.174 [2024-07-15 16:30:36.151843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.174 [2024-07-15 16:30:36.151906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.174 [2024-07-15 16:30:36.151923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0
00:31:56.174 [2024-07-15 16:30:36.151946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.174 [2024-07-15 16:30:36.151963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... a long run of further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: READ and WRITE commands on qid:1 nsid:1 (lba 19560-21184, len:8), all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), 2024-07-15 16:30:36.151985 through 16:30:36.171299 ...]
00:31:56.179 [2024-07-15 16:30:36.171319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.179 [2024-07-15 16:30:36.171335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:56.179 [2024-07-15 16:30:36.171356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.179 [2024-07-15 16:30:36.171371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:56.179 [2024-07-15 16:30:36.171392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.171629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.171664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.171700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.171767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.171807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.171845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.171884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.171962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.171984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.172001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.172056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.172119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.172157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.172193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.172229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.172265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.172301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.172337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.172358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.172374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.174928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.174954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.174982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.175000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:31:56.180 [2024-07-15 16:30:36.175082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.175408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.175443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.175479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.175515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.180 [2024-07-15 16:30:36.175551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.180 [2024-07-15 16:30:36.175591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:56.180 [2024-07-15 16:30:36.175613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.175628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.175665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.175700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.175767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.175807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.175845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.175884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.175922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.175961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.175983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.176000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.176054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.176117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.176156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.176192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.176228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:56.181 [2024-07-15 16:30:36.176263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.176299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.176320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.176336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.177410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.177447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.177483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.177519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.177845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.177867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.177883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.178556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.178598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.178635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.181 [2024-07-15 16:30:36.178672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.178729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.178780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.178819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.178857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.178896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.181 [2024-07-15 16:30:36.178934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.181 [2024-07-15 16:30:36.178957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.178973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.178995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:31:56.182 [2024-07-15 16:30:36.179108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.179563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.179585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.179600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.182 [2024-07-15 16:30:36.181884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:56.182 [2024-07-15 16:30:36.181923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.181982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.181998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.182038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.182 [2024-07-15 16:30:36.182055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:56.182 [2024-07-15 16:30:36.182076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.183 [2024-07-15 16:30:36.182107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:56.183 [2024-07-15 16:30:36.182129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.183 [2024-07-15 16:30:36.182144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:56.183 [2024-07-15 16:30:36.182164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.183 [2024-07-15 16:30:36.182180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:56.183 [2024-07-15 16:30:36.182200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.183 [2024-07-15 16:30:36.182215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:56.183 [2024-07-15 16:30:36.182235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.183 [2024-07-15 16:30:36.182251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:56.183 [2024-07-15 16:30:36.182272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.183 [2024-07-15 16:30:36.182287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.183 [2024-07-15 16:30:36.182307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.183 [2024-07-15 16:30:36.182322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:56.183 [2024-07-15 16:30:36.182347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.183 [2024-07-15 16:30:36.182363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:56.183 [2024-07-15 16:30:36.182383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.183 [2024-07-15 16:30:36.182399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs trimmed: READ and WRITE commands on sqid:1 (nsid:1, len:8, lba between 20272 and 22472), every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 16:30:36.182 through 16:30:36.200 ...]
00:31:56.187 [2024-07-15 16:30:36.200600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.187 [2024-07-15 16:30:36.200617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.200655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.200692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.200753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.200798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.200837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.200876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.200913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.200951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.200973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.200990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.201028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.201066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.201109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.201147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.201185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.201223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.201261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.201300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.201338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.201360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.201377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.203512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.203576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.203616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.203670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.203712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.188 [2024-07-15 16:30:36.203780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.203818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.203856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:56.188 [2024-07-15 16:30:36.203894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:56.188 [2024-07-15 16:30:36.203916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
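In these completions, (03/02) is Status Code Type 0x3 (path-related status) carrying Status Code 0x02, Asymmetric Access Inaccessible: the multipath test has flipped this path's ANA state, so queued READ/WRITE commands are failed back to the multipath layer instead of being executed. A quick way to gauge how much I/O was caught in the transition is to tally the notices straight out of the build log; a minimal sketch (the log filename is an assumption):

    # total failed completions, then the affected commands split by opcode
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c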
00:31:56.188 Received shutdown signal, test time was about 34.362086 seconds
00:31:56.188
00:31:56.188 Latency(us)
00:31:56.188 Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min          max
00:31:56.188 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:56.188 Verification LBA range: start 0x0 length 0x4000
00:31:56.188 Nvme0n1            : 34.36      8412.63  32.86   0.00     0.00   15190.63    621.99    4026531.84
00:31:56.188 ===================================================================================================================
00:31:56.188 Total              :            8412.63  32.86   0.00     0.00   15190.63    621.99    4026531.84
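The Total row is easy to sanity-check against the job parameters: at the 4096-byte I/O size, 8412.63 IOPS works out to 32.86 MiB/s, and over the 34.36 s runtime that is roughly 289k verified I/Os; a one-liner reproducing both figures:

    # MiB/s = IOPS * block size / 2^20; total I/Os = IOPS * runtime
    awk 'BEGIN { iops = 8412.63; printf "%.2f MiB/s, ~%d I/Os\n", iops * 4096 / 1048576, iops * 34.36 }'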
00:31:56.188 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:56.447 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:56.706 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:56.706 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:31:56.706 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:31:56.706 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 445520 ']'
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 445520
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 445520 ']'
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 445520
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 445520
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 445520'
killing process with pid 445520
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 445520
00:31:56.707 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 445520
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:56.998 16:30:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:58.927 16:30:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:58.927
00:31:58.927 real 0m43.008s
00:31:58.927 user 2m10.395s
00:31:58.927 sys 0m11.871s
00:31:58.927 16:30:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:31:58.927 16:30:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:58.927 ************************************
00:31:58.927 END TEST nvmf_host_multipath_status
00:31:58.927 ************************************
00:31:58.927 16:30:41 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:31:58.927 16:30:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:31:58.927 16:30:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:31:58.927 16:30:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:58.927 ************************************
00:31:58.927 START TEST nvmf_discovery_remove_ifc
00:31:58.927 ************************************
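The real/user/sys block and the banners around it come from autotest's run_test wrapper; user time (2m10s) dwarfs the 43 s wall clock because the SPDK reactors busy-poll their cores for the whole run. Roughly, the wrapper behaves like this paraphrase (reconstructed from the trace, not the actual helper in autotest_common.sh):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                                  # produces the real/user/sys block above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }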
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:31:58.927 * Looking for test storage...
00:31:58.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
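The host identity comes straight from nvme-cli: nvme gen-hostnqn emits a fresh uuid-based NQN, and the host ID is just its trailing UUID, matching the pair logged above. A sketch of the same derivation (the parameter expansion is an assumption about how common.sh splits it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip through the last ':' to keep the bare UUID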
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:58.927 [... paths/export.sh@2-@6 elided: four near-identical assignments that keep prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already-long PATH, followed by export PATH and echo of the result ...]
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
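With the well-known discovery NQN pinned to port 8009, the same target can be enumerated from the initiator side with stock nvme-cli once the listener comes up later in the run; a sketch (addresses and host NQN taken from this run):

    # query the SPDK discovery service for advertised subsystems
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -q nqn.2021-12.io.spdk:test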
00:31:58.927 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:31:58.928 16:30:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:00.828 [... nvmf/common.sh@291-@318 elided: xtrace of the supported-NIC table being built -- the pci_devs/pci_net_devs/pci_drivers/net_devs arrays are initialized, then per-vendor device IDs are appended: e810=(0x1592 0x159b), x722=(0x37d2), mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013) ...]
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:32:00.828 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:32:00.828 Found 0000:84:00.0 (0x8086 - 0x159b)
00:32:00.828 [... nvmf/common.sh@342-@352 driver checks elided: ice is neither unknown nor unbound, 0x159b is not an mlx device ID, and the transport is not rdma ...]
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:32:00.829 Found 0000:84:00.1 (0x8086 - 0x159b)
00:32:00.829 [... the same per-device checks elided for the second port ...]
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
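The two functions found are the two ports of a single Intel E810-family NIC (device ID 0x159b); outside the harness, the same match can be cross-checked by vendor:device pair, a sketch (output shape varies by system):

    lspci -d 8086:159b    # list PCI functions with vendor 0x8086, device 0x159b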
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:32:00.829 Found net devices under 0000:84:00.0: cvl_0_0
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:00.829 [... identical @382-@401 pass for the second function elided ...]
00:32:00.829 Found net devices under 0000:84:00.1: cvl_0_1
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
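The two ports are wired back-to-back, and nvmf_tcp_init splits them across namespaces: cvl_0_0 is about to move into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic really crosses the physical link; the iptables ACCEPT rule below just keeps the initiator-side firewall off port 4420. Once the setup that follows completes, the split can be inspected by hand; a sketch:

    ip addr show dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr show dev cvl_0_0    # target side, test namespace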
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:32:00.829 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:01.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:01.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms
00:32:01.086
00:32:01.086 --- 10.0.0.2 ping statistics ---
00:32:01.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:01.086 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:01.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:01.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms
00:32:01.086
00:32:01.086 --- 10.0.0.1 ping statistics ---
00:32:01.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:01.086 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=452816
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 452816
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 452816 ']'
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:01.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable
00:32:01.086 16:30:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:01.086 [2024-07-15 16:30:43.991868] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
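nvmfappstart launches the target inside the namespace, and waitforlisten then polls (up to the 100 retries set above) until the RPC socket answers; stripped of the bookkeeping, the sequence is roughly this sketch (not the real helper):

    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # wait for the RPC socket to appear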
00:32:01.086 [2024-07-15 16:30:43.991950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.087 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.087 [2024-07-15 16:30:44.057676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.344 [2024-07-15 16:30:44.144918] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.344 [2024-07-15 16:30:44.144968] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.344 [2024-07-15 16:30:44.144992] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.344 [2024-07-15 16:30:44.145024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.344 [2024-07-15 16:30:44.145035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.344 [2024-07-15 16:30:44.145060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.344 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.344 [2024-07-15 16:30:44.282665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.344 [2024-07-15 16:30:44.290867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:01.344 null0 00:32:01.344 [2024-07-15 16:30:44.322818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=452844 00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 452844 /tmp/host.sock 00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 452844 ']' 00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:01.602 
16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:32:01.602 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable
00:32:01.602 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:01.602 [2024-07-15 16:30:44.387523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:32:01.602 [2024-07-15 16:30:44.387587] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452844 ]
00:32:01.602 EAL: No free 2048 kB hugepages reported on node 1
00:32:01.602 [2024-07-15 16:30:44.449349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:01.602 [2024-07-15 16:30:44.540359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
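The three timeout knobs are the crux of the test: --reconnect-delay-sec 1 retries the lost controller once a second, --fast-io-fail-timeout-sec 1 fails pending I/O after one second without a connection, and --ctrlr-loss-timeout-sec 2 gives up on the controller (deleting its bdev) after two, so the interface removal further down resolves within a few seconds. rpc_cmd is the harness wrapper; issued by hand with rpc.py the same attach looks like:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach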
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:01.861 16:30:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:02.798 [2024-07-15 16:30:45.750947] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:32:02.798 [2024-07-15 16:30:45.750976] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:32:02.798 [2024-07-15 16:30:45.750999] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:32:03.056 [2024-07-15 16:30:45.839291] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:32:03.056 [2024-07-15 16:30:46.024309] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:32:03.056 [2024-07-15 16:30:46.024379] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:32:03.056 [2024-07-15 16:30:46.024416] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:32:03.056 [2024-07-15 16:30:46.024441] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:32:03.056 [2024-07-15 16:30:46.024477] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:03.056 [2024-07-15 16:30:46.028839] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x64a670 was disconnected and freed. delete nvme_qpair.
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:32:03.056 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:32:03.314 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:03.314 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:32:03.314 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:32:03.314 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:32:03.314 16:30:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
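With the target address deleted and the link down, the script now waits for the host to drop the bdev. From the trace, get_bdev_list is an RPC-plus-jq pipeline and wait_for_bdev compares its output against the expected list (the empty string here, i.e. nvme0n1 must disappear); a paraphrase of the loop being run (helper internals reconstructed from the trace and may differ from the real script):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1    # poll once per second, as the repeated trace blocks below show
        done
    }
    wait_for_bdev ''    # block until nvme0n1 is gone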
00:32:03.314 [... wait_for_bdev '' polls once per second from 16:30:46 to 16:30:51: each pass runs get_bdev_list (rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs), still sees nvme0n1, finds [[ nvme0n1 != '' ]] true against the expected empty list, and sleeps 1 s ...]
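The loop can only exit once the host side notices the dead path, and that happens just below: spdk_sock_recv() fails with errno 110, which is Linux's ETIMEDOUT, the admin queue is torn down (the ABORTED - SQ DELETION completions that follow), and the ctrlr-loss timer from bdev_nvme_start_discovery can then delete nvme0n1. Confirming the errno mapping:

    grep -w ETIMEDOUT /usr/include/asm-generic/errno.h
    # => #define ETIMEDOUT 110 /* Connection timed out */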
get_bdev_list 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.521 16:30:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:08.459 16:30:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.719 [2024-07-15 16:30:51.465257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:08.719 [2024-07-15 16:30:51.465330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.719 [2024-07-15 16:30:51.465356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.719 [2024-07-15 16:30:51.465377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.719 [2024-07-15 16:30:51.465393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.719 [2024-07-15 16:30:51.465410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.719 [2024-07-15 16:30:51.465425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.719 [2024-07-15 16:30:51.465441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:08.719 [2024-07-15 16:30:51.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.719 [2024-07-15 16:30:51.465474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:08.719 [2024-07-15 16:30:51.465490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:08.719 [2024-07-15 16:30:51.465507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6117e0 is same with the state(5) to be set 00:32:08.719 [2024-07-15 16:30:51.475276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6117e0 (9): Bad file descriptor 00:32:08.719 [2024-07-15 16:30:51.485332] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.658 [2024-07-15 16:30:52.487781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:09.658 [2024-07-15 16:30:52.487844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6117e0 with addr=10.0.0.2, port=4420 00:32:09.658 [2024-07-15 16:30:52.487871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6117e0 is same with the state(5) to be set 00:32:09.658 [2024-07-15 16:30:52.487922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6117e0 (9): Bad file descriptor 00:32:09.658 [2024-07-15 16:30:52.488331] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:09.658 [2024-07-15 16:30:52.488361] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:09.658 [2024-07-15 16:30:52.488377] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:09.658 [2024-07-15 16:30:52.488395] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:09.658 [2024-07-15 16:30:52.488423] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
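[Annotation] The xtrace lines above repeat one polling idiom from discovery_remove_ifc.sh: list the host-side bdevs over the RPC socket, normalize the list, and sleep until it matches an expected value. A minimal sketch of that helper pair, assuming the host app serves RPC on /tmp/host.sock and using plain rpc.py as a stand-in for the autotest rpc_cmd wrapper:

    # List bdev names as one sorted, space-separated string
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expected value
    wait_for_bdev() {
        local expected="$1"
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

Here wait_for_bdev '' blocks until nvme0n1 disappears, which is why the trace keeps re-evaluating [[ nvme0n1 != '' ]] while the controller reset below is still failing.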
00:32:09.658 [2024-07-15 16:30:52.488442] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.658 16:30:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.593 [2024-07-15 16:30:53.490946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:10.593 [2024-07-15 16:30:53.491008] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:10.593 [2024-07-15 16:30:53.491038] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:10.593 [2024-07-15 16:30:53.491058] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:10.593 [2024-07-15 16:30:53.491093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.593 [2024-07-15 16:30:53.491144] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:10.593 [2024-07-15 16:30:53.491197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.593 [2024-07-15 16:30:53.491224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.593 [2024-07-15 16:30:53.491248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.593 [2024-07-15 16:30:53.491264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.593 [2024-07-15 16:30:53.491281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.593 [2024-07-15 16:30:53.491298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.593 [2024-07-15 16:30:53.491314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.593 [2024-07-15 16:30:53.491338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.593 [2024-07-15 16:30:53.491355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:10.593 [2024-07-15 16:30:53.491371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.593 [2024-07-15 16:30:53.491389] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
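[Annotation] The ETIMEDOUT (errno 110) and "Bad file descriptor" errors above are expected: the test pulled the target's address out from under the live connection. The injection and recovery commands appear verbatim in the trace (the cvl_0_0 device lives inside the cvl_0_0_ns_spdk namespace):

    # Inject the fault: drop the target IP and down the link
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # ... host-side reconnect attempts fail until the path returns ...

    # Recover: restore the address and bring the link back up
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

Once the link returns, the discovery poller reattaches the subsystem as nvme1 rather than nvme0, since the old controller was torn down in the failed state.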
00:32:10.593 [2024-07-15 16:30:53.491491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x610c70 (9): Bad file descriptor 00:32:10.593 [2024-07-15 16:30:53.492528] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:10.593 [2024-07-15 16:30:53.492553] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.593 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.851 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:10.852 16:30:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:11.782 16:30:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.712 [2024-07-15 16:30:55.546472] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:12.712 [2024-07-15 16:30:55.546498] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:12.712 [2024-07-15 16:30:55.546524] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:12.712 [2024-07-15 16:30:55.675943] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:12.713 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.713 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.713 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.713 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.713 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.713 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.713 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.972 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.972 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:12.972 16:30:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.972 [2024-07-15 16:30:55.776137] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:12.972 [2024-07-15 16:30:55.776194] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:12.972 [2024-07-15 16:30:55.776230] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:12.972 [2024-07-15 16:30:55.776258] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:12.972 [2024-07-15 16:30:55.776272] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:12.972 [2024-07-15 16:30:55.784829] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61f640 was disconnected and freed. delete nvme_qpair. 
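[Annotation] After nvme1n1 reappears, the test clears its traps and kills the two daemons it started: the host app (pid 452844), then the target (pid 452816). The killprocess helper visible in the following trace reduces to roughly the sketch below; the reactor_0 comparison in the real helper is an SPDK-specific guard (reactors rename their comm, and the helper refuses to kill a process named sudo):

    # Minimal sketch of the autotest killprocess helper
    killprocess() {
        local pid="$1"
        kill -0 "$pid" || return 1        # still running?
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"        # SIGTERM, then reap
    }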
00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 452844 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 452844 ']' 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 452844 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 452844 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 452844' 00:32:13.910 killing process with pid 452844 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 452844 00:32:13.910 16:30:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 452844 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:14.170 rmmod nvme_tcp 00:32:14.170 rmmod nvme_fabrics 00:32:14.170 rmmod nvme_keyring 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:14.170 
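[Annotation] The nvmfcleanup block that follows detaches the kernel initiator stack; the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages are that loop's normal output, not errors. Reconstructed from the nvmf/common.sh xtrace — the break-on-success and retry delay are inferred, since the loop body only appears once in this capture:

    # Flush I/O, then retry unloading nvme-tcp until its refcount drops
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # modprobe -r also rmmods dependent modules
        sleep 1                            # inferred retry delay
    done
    modprobe -v -r nvme-fabrics            # usually a no-op once nvme-tcp is gone
    set -e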
16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 452816 ']' 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 452816 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 452816 ']' 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 452816 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 452816 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 452816' 00:32:14.170 killing process with pid 452816 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 452816 00:32:14.170 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 452816 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.427 16:30:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.963 16:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:16.963 00:32:16.963 real 0m17.595s 00:32:16.963 user 0m25.558s 00:32:16.963 sys 0m3.033s 00:32:16.963 16:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:16.963 16:30:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.963 ************************************ 00:32:16.963 END TEST nvmf_discovery_remove_ifc 00:32:16.963 ************************************ 00:32:16.963 16:30:59 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:16.963 16:30:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:16.963 16:30:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:16.963 16:30:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:16.963 ************************************ 00:32:16.963 START TEST nvmf_identify_kernel_target 00:32:16.963 ************************************ 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:16.963 * Looking for test storage... 00:32:16.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.963 16:30:59 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:16.963 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:16.964 16:30:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:16.964 16:30:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:18.869 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:18.869 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.869 16:31:01 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:18.869 Found net devices under 0000:84:00.0: cvl_0_0 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:18.869 Found net devices under 0000:84:00.1: cvl_0_1 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.869 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:18.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:32:18.869 00:32:18.870 --- 10.0.0.2 ping statistics --- 00:32:18.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.870 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:18.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:32:18.870 00:32:18.870 --- 10.0.0.1 ping statistics --- 00:32:18.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.870 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.870 16:31:01 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:18.870 16:31:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:19.806 Waiting for block devices as requested 00:32:19.806 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:19.806 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:20.064 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:20.064 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:20.064 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:20.064 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:20.323 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:20.323 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:20.323 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:20.323 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:20.323 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:20.582 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:20.582 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:20.582 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:20.582 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:20.840 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:20.840 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:20.840 16:31:03 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:20.840 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:20.840 No valid GPT data, bailing 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:32:21.099 00:32:21.099 Discovery Log Number of Records 2, Generation counter 2 00:32:21.099 =====Discovery Log Entry 0====== 00:32:21.099 trtype: tcp 00:32:21.099 adrfam: ipv4 00:32:21.099 subtype: current discovery subsystem 00:32:21.099 treq: not specified, sq flow control disable supported 00:32:21.099 portid: 1 00:32:21.099 trsvcid: 4420 00:32:21.099 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:21.099 traddr: 10.0.0.1 00:32:21.099 eflags: none 00:32:21.099 sectype: none 00:32:21.099 =====Discovery Log Entry 1====== 
00:32:21.099 trtype: tcp 00:32:21.099 adrfam: ipv4 00:32:21.099 subtype: nvme subsystem 00:32:21.099 treq: not specified, sq flow control disable supported 00:32:21.099 portid: 1 00:32:21.099 trsvcid: 4420 00:32:21.099 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:21.099 traddr: 10.0.0.1 00:32:21.099 eflags: none 00:32:21.099 sectype: none 00:32:21.099 16:31:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:21.099 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:21.099 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.099 ===================================================== 00:32:21.099 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:21.099 ===================================================== 00:32:21.099 Controller Capabilities/Features 00:32:21.099 ================================ 00:32:21.099 Vendor ID: 0000 00:32:21.099 Subsystem Vendor ID: 0000 00:32:21.099 Serial Number: bd982d45ed3eae99fa93 00:32:21.099 Model Number: Linux 00:32:21.099 Firmware Version: 6.7.0-68 00:32:21.099 Recommended Arb Burst: 0 00:32:21.099 IEEE OUI Identifier: 00 00 00 00:32:21.099 Multi-path I/O 00:32:21.099 May have multiple subsystem ports: No 00:32:21.099 May have multiple controllers: No 00:32:21.099 Associated with SR-IOV VF: No 00:32:21.099 Max Data Transfer Size: Unlimited 00:32:21.099 Max Number of Namespaces: 0 00:32:21.099 Max Number of I/O Queues: 1024 00:32:21.099 NVMe Specification Version (VS): 1.3 00:32:21.099 NVMe Specification Version (Identify): 1.3 00:32:21.099 Maximum Queue Entries: 1024 00:32:21.099 Contiguous Queues Required: No 00:32:21.099 Arbitration Mechanisms Supported 00:32:21.099 Weighted Round Robin: Not Supported 00:32:21.099 Vendor Specific: Not Supported 00:32:21.099 Reset Timeout: 7500 ms 00:32:21.099 Doorbell Stride: 4 bytes 00:32:21.099 NVM Subsystem Reset: Not Supported 00:32:21.099 Command Sets Supported 00:32:21.099 NVM Command Set: Supported 00:32:21.099 Boot Partition: Not Supported 00:32:21.099 Memory Page Size Minimum: 4096 bytes 00:32:21.099 Memory Page Size Maximum: 4096 bytes 00:32:21.099 Persistent Memory Region: Not Supported 00:32:21.099 Optional Asynchronous Events Supported 00:32:21.099 Namespace Attribute Notices: Not Supported 00:32:21.099 Firmware Activation Notices: Not Supported 00:32:21.099 ANA Change Notices: Not Supported 00:32:21.099 PLE Aggregate Log Change Notices: Not Supported 00:32:21.099 LBA Status Info Alert Notices: Not Supported 00:32:21.099 EGE Aggregate Log Change Notices: Not Supported 00:32:21.099 Normal NVM Subsystem Shutdown event: Not Supported 00:32:21.099 Zone Descriptor Change Notices: Not Supported 00:32:21.099 Discovery Log Change Notices: Supported 00:32:21.099 Controller Attributes 00:32:21.099 128-bit Host Identifier: Not Supported 00:32:21.099 Non-Operational Permissive Mode: Not Supported 00:32:21.099 NVM Sets: Not Supported 00:32:21.099 Read Recovery Levels: Not Supported 00:32:21.099 Endurance Groups: Not Supported 00:32:21.099 Predictable Latency Mode: Not Supported 00:32:21.099 Traffic Based Keep ALive: Not Supported 00:32:21.099 Namespace Granularity: Not Supported 00:32:21.099 SQ Associations: Not Supported 00:32:21.099 UUID List: Not Supported 00:32:21.099 Multi-Domain Subsystem: Not Supported 00:32:21.099 Fixed Capacity Management: Not Supported 00:32:21.099 Variable Capacity Management: Not 
Supported 00:32:21.099 Delete Endurance Group: Not Supported 00:32:21.099 Delete NVM Set: Not Supported 00:32:21.099 Extended LBA Formats Supported: Not Supported 00:32:21.099 Flexible Data Placement Supported: Not Supported 00:32:21.099 00:32:21.099 Controller Memory Buffer Support 00:32:21.099 ================================ 00:32:21.099 Supported: No 00:32:21.099 00:32:21.099 Persistent Memory Region Support 00:32:21.099 ================================ 00:32:21.099 Supported: No 00:32:21.099 00:32:21.099 Admin Command Set Attributes 00:32:21.099 ============================ 00:32:21.099 Security Send/Receive: Not Supported 00:32:21.099 Format NVM: Not Supported 00:32:21.099 Firmware Activate/Download: Not Supported 00:32:21.099 Namespace Management: Not Supported 00:32:21.099 Device Self-Test: Not Supported 00:32:21.099 Directives: Not Supported 00:32:21.099 NVMe-MI: Not Supported 00:32:21.099 Virtualization Management: Not Supported 00:32:21.099 Doorbell Buffer Config: Not Supported 00:32:21.099 Get LBA Status Capability: Not Supported 00:32:21.099 Command & Feature Lockdown Capability: Not Supported 00:32:21.099 Abort Command Limit: 1 00:32:21.099 Async Event Request Limit: 1 00:32:21.099 Number of Firmware Slots: N/A 00:32:21.099 Firmware Slot 1 Read-Only: N/A 00:32:21.099 Firmware Activation Without Reset: N/A 00:32:21.099 Multiple Update Detection Support: N/A 00:32:21.099 Firmware Update Granularity: No Information Provided 00:32:21.099 Per-Namespace SMART Log: No 00:32:21.099 Asymmetric Namespace Access Log Page: Not Supported 00:32:21.099 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:21.099 Command Effects Log Page: Not Supported 00:32:21.099 Get Log Page Extended Data: Supported 00:32:21.099 Telemetry Log Pages: Not Supported 00:32:21.099 Persistent Event Log Pages: Not Supported 00:32:21.099 Supported Log Pages Log Page: May Support 00:32:21.099 Commands Supported & Effects Log Page: Not Supported 00:32:21.099 Feature Identifiers & Effects Log Page:May Support 00:32:21.099 NVMe-MI Commands & Effects Log Page: May Support 00:32:21.099 Data Area 4 for Telemetry Log: Not Supported 00:32:21.099 Error Log Page Entries Supported: 1 00:32:21.099 Keep Alive: Not Supported 00:32:21.099 00:32:21.099 NVM Command Set Attributes 00:32:21.099 ========================== 00:32:21.099 Submission Queue Entry Size 00:32:21.099 Max: 1 00:32:21.099 Min: 1 00:32:21.099 Completion Queue Entry Size 00:32:21.099 Max: 1 00:32:21.099 Min: 1 00:32:21.099 Number of Namespaces: 0 00:32:21.099 Compare Command: Not Supported 00:32:21.099 Write Uncorrectable Command: Not Supported 00:32:21.099 Dataset Management Command: Not Supported 00:32:21.099 Write Zeroes Command: Not Supported 00:32:21.099 Set Features Save Field: Not Supported 00:32:21.099 Reservations: Not Supported 00:32:21.099 Timestamp: Not Supported 00:32:21.099 Copy: Not Supported 00:32:21.099 Volatile Write Cache: Not Present 00:32:21.099 Atomic Write Unit (Normal): 1 00:32:21.099 Atomic Write Unit (PFail): 1 00:32:21.099 Atomic Compare & Write Unit: 1 00:32:21.099 Fused Compare & Write: Not Supported 00:32:21.099 Scatter-Gather List 00:32:21.099 SGL Command Set: Supported 00:32:21.099 SGL Keyed: Not Supported 00:32:21.099 SGL Bit Bucket Descriptor: Not Supported 00:32:21.099 SGL Metadata Pointer: Not Supported 00:32:21.099 Oversized SGL: Not Supported 00:32:21.099 SGL Metadata Address: Not Supported 00:32:21.099 SGL Offset: Supported 00:32:21.100 Transport SGL Data Block: Not Supported 00:32:21.100 Replay Protected Memory Block: 
Not Supported 00:32:21.100 00:32:21.100 Firmware Slot Information 00:32:21.100 ========================= 00:32:21.100 Active slot: 0 00:32:21.100 00:32:21.100 00:32:21.100 Error Log 00:32:21.100 ========= 00:32:21.100 00:32:21.100 Active Namespaces 00:32:21.100 ================= 00:32:21.100 Discovery Log Page 00:32:21.100 ================== 00:32:21.100 Generation Counter: 2 00:32:21.100 Number of Records: 2 00:32:21.100 Record Format: 0 00:32:21.100 00:32:21.100 Discovery Log Entry 0 00:32:21.100 ---------------------- 00:32:21.100 Transport Type: 3 (TCP) 00:32:21.100 Address Family: 1 (IPv4) 00:32:21.100 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:21.100 Entry Flags: 00:32:21.100 Duplicate Returned Information: 0 00:32:21.100 Explicit Persistent Connection Support for Discovery: 0 00:32:21.100 Transport Requirements: 00:32:21.100 Secure Channel: Not Specified 00:32:21.100 Port ID: 1 (0x0001) 00:32:21.100 Controller ID: 65535 (0xffff) 00:32:21.100 Admin Max SQ Size: 32 00:32:21.100 Transport Service Identifier: 4420 00:32:21.100 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:21.100 Transport Address: 10.0.0.1 00:32:21.100 Discovery Log Entry 1 00:32:21.100 ---------------------- 00:32:21.100 Transport Type: 3 (TCP) 00:32:21.100 Address Family: 1 (IPv4) 00:32:21.100 Subsystem Type: 2 (NVM Subsystem) 00:32:21.100 Entry Flags: 00:32:21.100 Duplicate Returned Information: 0 00:32:21.100 Explicit Persistent Connection Support for Discovery: 0 00:32:21.100 Transport Requirements: 00:32:21.100 Secure Channel: Not Specified 00:32:21.100 Port ID: 1 (0x0001) 00:32:21.100 Controller ID: 65535 (0xffff) 00:32:21.100 Admin Max SQ Size: 32 00:32:21.100 Transport Service Identifier: 4420 00:32:21.100 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:21.100 Transport Address: 10.0.0.1 00:32:21.100 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.100 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.359 get_feature(0x01) failed 00:32:21.359 get_feature(0x02) failed 00:32:21.359 get_feature(0x04) failed 00:32:21.359 ===================================================== 00:32:21.359 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:21.359 ===================================================== 00:32:21.359 Controller Capabilities/Features 00:32:21.359 ================================ 00:32:21.359 Vendor ID: 0000 00:32:21.359 Subsystem Vendor ID: 0000 00:32:21.359 Serial Number: 2ac29157769982dbca0c 00:32:21.359 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:21.359 Firmware Version: 6.7.0-68 00:32:21.359 Recommended Arb Burst: 6 00:32:21.359 IEEE OUI Identifier: 00 00 00 00:32:21.359 Multi-path I/O 00:32:21.360 May have multiple subsystem ports: Yes 00:32:21.360 May have multiple controllers: Yes 00:32:21.360 Associated with SR-IOV VF: No 00:32:21.360 Max Data Transfer Size: Unlimited 00:32:21.360 Max Number of Namespaces: 1024 00:32:21.360 Max Number of I/O Queues: 128 00:32:21.360 NVMe Specification Version (VS): 1.3 00:32:21.360 NVMe Specification Version (Identify): 1.3 00:32:21.360 Maximum Queue Entries: 1024 00:32:21.360 Contiguous Queues Required: No 00:32:21.360 Arbitration Mechanisms Supported 00:32:21.360 Weighted Round Robin: Not Supported 00:32:21.360 Vendor Specific: Not Supported 
00:32:21.360 Reset Timeout: 7500 ms 00:32:21.360 Doorbell Stride: 4 bytes 00:32:21.360 NVM Subsystem Reset: Not Supported 00:32:21.360 Command Sets Supported 00:32:21.360 NVM Command Set: Supported 00:32:21.360 Boot Partition: Not Supported 00:32:21.360 Memory Page Size Minimum: 4096 bytes 00:32:21.360 Memory Page Size Maximum: 4096 bytes 00:32:21.360 Persistent Memory Region: Not Supported 00:32:21.360 Optional Asynchronous Events Supported 00:32:21.360 Namespace Attribute Notices: Supported 00:32:21.360 Firmware Activation Notices: Not Supported 00:32:21.360 ANA Change Notices: Supported 00:32:21.360 PLE Aggregate Log Change Notices: Not Supported 00:32:21.360 LBA Status Info Alert Notices: Not Supported 00:32:21.360 EGE Aggregate Log Change Notices: Not Supported 00:32:21.360 Normal NVM Subsystem Shutdown event: Not Supported 00:32:21.360 Zone Descriptor Change Notices: Not Supported 00:32:21.360 Discovery Log Change Notices: Not Supported 00:32:21.360 Controller Attributes 00:32:21.360 128-bit Host Identifier: Supported 00:32:21.360 Non-Operational Permissive Mode: Not Supported 00:32:21.360 NVM Sets: Not Supported 00:32:21.360 Read Recovery Levels: Not Supported 00:32:21.360 Endurance Groups: Not Supported 00:32:21.360 Predictable Latency Mode: Not Supported 00:32:21.360 Traffic Based Keep ALive: Supported 00:32:21.360 Namespace Granularity: Not Supported 00:32:21.360 SQ Associations: Not Supported 00:32:21.360 UUID List: Not Supported 00:32:21.360 Multi-Domain Subsystem: Not Supported 00:32:21.360 Fixed Capacity Management: Not Supported 00:32:21.360 Variable Capacity Management: Not Supported 00:32:21.360 Delete Endurance Group: Not Supported 00:32:21.360 Delete NVM Set: Not Supported 00:32:21.360 Extended LBA Formats Supported: Not Supported 00:32:21.360 Flexible Data Placement Supported: Not Supported 00:32:21.360 00:32:21.360 Controller Memory Buffer Support 00:32:21.360 ================================ 00:32:21.360 Supported: No 00:32:21.360 00:32:21.360 Persistent Memory Region Support 00:32:21.360 ================================ 00:32:21.360 Supported: No 00:32:21.360 00:32:21.360 Admin Command Set Attributes 00:32:21.360 ============================ 00:32:21.360 Security Send/Receive: Not Supported 00:32:21.360 Format NVM: Not Supported 00:32:21.360 Firmware Activate/Download: Not Supported 00:32:21.360 Namespace Management: Not Supported 00:32:21.360 Device Self-Test: Not Supported 00:32:21.360 Directives: Not Supported 00:32:21.360 NVMe-MI: Not Supported 00:32:21.360 Virtualization Management: Not Supported 00:32:21.360 Doorbell Buffer Config: Not Supported 00:32:21.360 Get LBA Status Capability: Not Supported 00:32:21.360 Command & Feature Lockdown Capability: Not Supported 00:32:21.360 Abort Command Limit: 4 00:32:21.360 Async Event Request Limit: 4 00:32:21.360 Number of Firmware Slots: N/A 00:32:21.360 Firmware Slot 1 Read-Only: N/A 00:32:21.360 Firmware Activation Without Reset: N/A 00:32:21.360 Multiple Update Detection Support: N/A 00:32:21.360 Firmware Update Granularity: No Information Provided 00:32:21.360 Per-Namespace SMART Log: Yes 00:32:21.360 Asymmetric Namespace Access Log Page: Supported 00:32:21.360 ANA Transition Time : 10 sec 00:32:21.360 00:32:21.360 Asymmetric Namespace Access Capabilities 00:32:21.360 ANA Optimized State : Supported 00:32:21.360 ANA Non-Optimized State : Supported 00:32:21.360 ANA Inaccessible State : Supported 00:32:21.360 ANA Persistent Loss State : Supported 00:32:21.360 ANA Change State : Supported 00:32:21.360 ANAGRPID is not 
changed : No 00:32:21.360 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:21.360 00:32:21.360 ANA Group Identifier Maximum : 128 00:32:21.360 Number of ANA Group Identifiers : 128 00:32:21.360 Max Number of Allowed Namespaces : 1024 00:32:21.360 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:21.360 Command Effects Log Page: Supported 00:32:21.360 Get Log Page Extended Data: Supported 00:32:21.360 Telemetry Log Pages: Not Supported 00:32:21.360 Persistent Event Log Pages: Not Supported 00:32:21.360 Supported Log Pages Log Page: May Support 00:32:21.360 Commands Supported & Effects Log Page: Not Supported 00:32:21.360 Feature Identifiers & Effects Log Page:May Support 00:32:21.360 NVMe-MI Commands & Effects Log Page: May Support 00:32:21.360 Data Area 4 for Telemetry Log: Not Supported 00:32:21.360 Error Log Page Entries Supported: 128 00:32:21.360 Keep Alive: Supported 00:32:21.360 Keep Alive Granularity: 1000 ms 00:32:21.360 00:32:21.360 NVM Command Set Attributes 00:32:21.360 ========================== 00:32:21.360 Submission Queue Entry Size 00:32:21.360 Max: 64 00:32:21.360 Min: 64 00:32:21.360 Completion Queue Entry Size 00:32:21.360 Max: 16 00:32:21.360 Min: 16 00:32:21.360 Number of Namespaces: 1024 00:32:21.360 Compare Command: Not Supported 00:32:21.360 Write Uncorrectable Command: Not Supported 00:32:21.360 Dataset Management Command: Supported 00:32:21.360 Write Zeroes Command: Supported 00:32:21.360 Set Features Save Field: Not Supported 00:32:21.360 Reservations: Not Supported 00:32:21.360 Timestamp: Not Supported 00:32:21.360 Copy: Not Supported 00:32:21.360 Volatile Write Cache: Present 00:32:21.360 Atomic Write Unit (Normal): 1 00:32:21.360 Atomic Write Unit (PFail): 1 00:32:21.360 Atomic Compare & Write Unit: 1 00:32:21.360 Fused Compare & Write: Not Supported 00:32:21.360 Scatter-Gather List 00:32:21.360 SGL Command Set: Supported 00:32:21.360 SGL Keyed: Not Supported 00:32:21.360 SGL Bit Bucket Descriptor: Not Supported 00:32:21.360 SGL Metadata Pointer: Not Supported 00:32:21.360 Oversized SGL: Not Supported 00:32:21.360 SGL Metadata Address: Not Supported 00:32:21.360 SGL Offset: Supported 00:32:21.360 Transport SGL Data Block: Not Supported 00:32:21.360 Replay Protected Memory Block: Not Supported 00:32:21.360 00:32:21.360 Firmware Slot Information 00:32:21.360 ========================= 00:32:21.360 Active slot: 0 00:32:21.360 00:32:21.360 Asymmetric Namespace Access 00:32:21.360 =========================== 00:32:21.360 Change Count : 0 00:32:21.360 Number of ANA Group Descriptors : 1 00:32:21.360 ANA Group Descriptor : 0 00:32:21.360 ANA Group ID : 1 00:32:21.360 Number of NSID Values : 1 00:32:21.360 Change Count : 0 00:32:21.360 ANA State : 1 00:32:21.360 Namespace Identifier : 1 00:32:21.360 00:32:21.360 Commands Supported and Effects 00:32:21.360 ============================== 00:32:21.360 Admin Commands 00:32:21.360 -------------- 00:32:21.360 Get Log Page (02h): Supported 00:32:21.360 Identify (06h): Supported 00:32:21.360 Abort (08h): Supported 00:32:21.360 Set Features (09h): Supported 00:32:21.360 Get Features (0Ah): Supported 00:32:21.360 Asynchronous Event Request (0Ch): Supported 00:32:21.360 Keep Alive (18h): Supported 00:32:21.360 I/O Commands 00:32:21.360 ------------ 00:32:21.360 Flush (00h): Supported 00:32:21.360 Write (01h): Supported LBA-Change 00:32:21.360 Read (02h): Supported 00:32:21.360 Write Zeroes (08h): Supported LBA-Change 00:32:21.360 Dataset Management (09h): Supported 00:32:21.360 00:32:21.360 Error Log 00:32:21.360 ========= 
00:32:21.360 Entry: 0 00:32:21.360 Error Count: 0x3 00:32:21.360 Submission Queue Id: 0x0 00:32:21.360 Command Id: 0x5 00:32:21.360 Phase Bit: 0 00:32:21.360 Status Code: 0x2 00:32:21.360 Status Code Type: 0x0 00:32:21.360 Do Not Retry: 1 00:32:21.360 Error Location: 0x28 00:32:21.360 LBA: 0x0 00:32:21.360 Namespace: 0x0 00:32:21.360 Vendor Log Page: 0x0 00:32:21.360 ----------- 00:32:21.360 Entry: 1 00:32:21.360 Error Count: 0x2 00:32:21.360 Submission Queue Id: 0x0 00:32:21.360 Command Id: 0x5 00:32:21.360 Phase Bit: 0 00:32:21.360 Status Code: 0x2 00:32:21.360 Status Code Type: 0x0 00:32:21.360 Do Not Retry: 1 00:32:21.360 Error Location: 0x28 00:32:21.360 LBA: 0x0 00:32:21.360 Namespace: 0x0 00:32:21.360 Vendor Log Page: 0x0 00:32:21.360 ----------- 00:32:21.360 Entry: 2 00:32:21.360 Error Count: 0x1 00:32:21.360 Submission Queue Id: 0x0 00:32:21.360 Command Id: 0x4 00:32:21.360 Phase Bit: 0 00:32:21.360 Status Code: 0x2 00:32:21.360 Status Code Type: 0x0 00:32:21.360 Do Not Retry: 1 00:32:21.360 Error Location: 0x28 00:32:21.360 LBA: 0x0 00:32:21.360 Namespace: 0x0 00:32:21.360 Vendor Log Page: 0x0 00:32:21.361 00:32:21.361 Number of Queues 00:32:21.361 ================ 00:32:21.361 Number of I/O Submission Queues: 128 00:32:21.361 Number of I/O Completion Queues: 128 00:32:21.361 00:32:21.361 ZNS Specific Controller Data 00:32:21.361 ============================ 00:32:21.361 Zone Append Size Limit: 0 00:32:21.361 00:32:21.361 00:32:21.361 Active Namespaces 00:32:21.361 ================= 00:32:21.361 get_feature(0x05) failed 00:32:21.361 Namespace ID:1 00:32:21.361 Command Set Identifier: NVM (00h) 00:32:21.361 Deallocate: Supported 00:32:21.361 Deallocated/Unwritten Error: Not Supported 00:32:21.361 Deallocated Read Value: Unknown 00:32:21.361 Deallocate in Write Zeroes: Not Supported 00:32:21.361 Deallocated Guard Field: 0xFFFF 00:32:21.361 Flush: Supported 00:32:21.361 Reservation: Not Supported 00:32:21.361 Namespace Sharing Capabilities: Multiple Controllers 00:32:21.361 Size (in LBAs): 1953525168 (931GiB) 00:32:21.361 Capacity (in LBAs): 1953525168 (931GiB) 00:32:21.361 Utilization (in LBAs): 1953525168 (931GiB) 00:32:21.361 UUID: f2c642da-0b8c-4bfc-8388-39888036bcd3 00:32:21.361 Thin Provisioning: Not Supported 00:32:21.361 Per-NS Atomic Units: Yes 00:32:21.361 Atomic Boundary Size (Normal): 0 00:32:21.361 Atomic Boundary Size (PFail): 0 00:32:21.361 Atomic Boundary Offset: 0 00:32:21.361 NGUID/EUI64 Never Reused: No 00:32:21.361 ANA group ID: 1 00:32:21.361 Namespace Write Protected: No 00:32:21.361 Number of LBA Formats: 1 00:32:21.361 Current LBA Format: LBA Format #00 00:32:21.361 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:21.361 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:21.361 rmmod nvme_tcp 00:32:21.361 rmmod nvme_fabrics 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.361 16:31:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:23.295 16:31:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:24.670 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:24.670 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:24.670 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:24.670 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:24.670 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:24.670 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:24.670 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:24.670 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:24.670 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:24.670 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:24.670 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:24.670 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:24.670 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:24.670 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:24.670 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:24.670 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:25.605 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:32:25.605 00:32:25.605 real 0m9.136s 00:32:25.605 user 0m1.899s 00:32:25.605 sys 0m3.280s 00:32:25.605 16:31:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:25.605 16:31:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.605 ************************************ 00:32:25.605 END TEST nvmf_identify_kernel_target 00:32:25.605 ************************************ 00:32:25.864 16:31:08 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:25.864 16:31:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:25.864 16:31:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:25.864 16:31:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:25.864 ************************************ 00:32:25.864 START TEST nvmf_auth_host 00:32:25.864 ************************************ 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:25.864 * Looking for test storage... 00:32:25.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
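The host identity used by every connect in the auth flows below is established by nvmf/common.sh, traced just above; condensed, it amounts to the following (a sketch assuming nvme-cli is installed; the parameter expansion that strips the NQN to a bare UUID is an inference about the helper, not a verbatim copy):

# Host identity per nvmf/common.sh@17-19 in the trace above (sketch).
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID for --hostid (inferred extraction)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")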
00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:25.864 16:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:27.763 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:27.763 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:27.763 Found net devices under 0000:84:00.0: cvl_0_0 00:32:27.763 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:27.764 Found net devices under 0000:84:00.1: cvl_0_1 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:27.764 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:28.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:32:28.023 00:32:28.023 --- 10.0.0.2 ping statistics --- 00:32:28.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.023 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:28.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:32:28.023 00:32:28.023 --- 10.0.0.1 ping statistics --- 00:32:28.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.023 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=459904 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 459904 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 459904 ']' 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:28.023 16:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.281 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:28.281 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:28.281 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:28.281 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.281 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.281 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.281 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4c0782d31f90d0e03f23427a7332eec3 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.n4K 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4c0782d31f90d0e03f23427a7332eec3 0 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4c0782d31f90d0e03f23427a7332eec3 0 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4c0782d31f90d0e03f23427a7332eec3 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.n4K 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.n4K 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.n4K 
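The gen_dhchap_key/format_key pair traced above is worth unpacking: a 16-byte random value is rendered as 32 hex characters, a little-endian CRC32 of those ASCII bytes is appended, and the result is base64-wrapped into the DHHC-1:<digest>:<b64>: framing. A minimal sketch reconstructed from the xxd/python trace (treat the CRC/base64 details as an inference from the DHHC-1 convention, not a verbatim copy of nvmf/common.sh):

# Roughly what "gen_dhchap_key null 32" does, per the trace above.
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, used as the ASCII secret
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, byteorder="little")  # integrity word appended to the secret
print("DHHC-1:{:02x}:{}:".format(0, base64.b64encode(secret + crc).decode()))  # 00 = null digest
EOF

nvme-cli's gen-dhchap-key subcommand emits secrets in the same framing, which is why keys generated here are usable by a kernel host.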
00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1293e3ce4b0c17d5feb8d70b92f5d645c28f994cc9be3d334aac1f58064b1a57 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wLQ 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1293e3ce4b0c17d5feb8d70b92f5d645c28f994cc9be3d334aac1f58064b1a57 3 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1293e3ce4b0c17d5feb8d70b92f5d645c28f994cc9be3d334aac1f58064b1a57 3 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1293e3ce4b0c17d5feb8d70b92f5d645c28f994cc9be3d334aac1f58064b1a57 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wLQ 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wLQ 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wLQ 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af61ed66d95b123c95b5a6640986e48b91b74204d80a0c09 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3iX 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af61ed66d95b123c95b5a6640986e48b91b74204d80a0c09 0 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af61ed66d95b123c95b5a6640986e48b91b74204d80a0c09 0 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af61ed66d95b123c95b5a6640986e48b91b74204d80a0c09 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:28.282 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3iX 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3iX 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3iX 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b4a1c4e58a55f4ef125840756bd3c8a82c1c3c7ffe4164ee 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3Ek 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b4a1c4e58a55f4ef125840756bd3c8a82c1c3c7ffe4164ee 2 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b4a1c4e58a55f4ef125840756bd3c8a82c1c3c7ffe4164ee 2 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b4a1c4e58a55f4ef125840756bd3c8a82c1c3c7ffe4164ee 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3Ek 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3Ek 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3Ek 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=731551de2fab0a6fb053413d9f9cc213 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Wg0 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 731551de2fab0a6fb053413d9f9cc213 1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 731551de2fab0a6fb053413d9f9cc213 1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=731551de2fab0a6fb053413d9f9cc213 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Wg0 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Wg0 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Wg0 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dc1e79f9db744256615fe8a897bf4d91 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hE1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dc1e79f9db744256615fe8a897bf4d91 1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dc1e79f9db744256615fe8a897bf4d91 1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dc1e79f9db744256615fe8a897bf4d91 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hE1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hE1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hE1 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:28.540 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.540 16:31:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e6b60367176b3c9aa0232aadd56449fc3e3f923a1a3022f6 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4B5 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e6b60367176b3c9aa0232aadd56449fc3e3f923a1a3022f6 2 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e6b60367176b3c9aa0232aadd56449fc3e3f923a1a3022f6 2 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e6b60367176b3c9aa0232aadd56449fc3e3f923a1a3022f6 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4B5 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4B5 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4B5 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e78d659b1256286e96f2c252705e0d98 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OQh 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e78d659b1256286e96f2c252705e0d98 0 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e78d659b1256286e96f2c252705e0d98 0 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e78d659b1256286e96f2c252705e0d98 00:32:28.541 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:28.541 16:31:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OQh 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OQh 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OQh 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5df8e8b6da7764e72082b2cb5e1b10f08993e6f46e277ce73bc6fe8bdbeedaf 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.UYt 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5df8e8b6da7764e72082b2cb5e1b10f08993e6f46e277ce73bc6fe8bdbeedaf 3 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5df8e8b6da7764e72082b2cb5e1b10f08993e6f46e277ce73bc6fe8bdbeedaf 3 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5df8e8b6da7764e72082b2cb5e1b10f08993e6f46e277ce73bc6fe8bdbeedaf 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.UYt 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.UYt 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.UYt 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 459904 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 459904 ']' 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
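waitforlisten then blocks until PID 459904 has its RPC socket up. For readers driving nvmf_tgt by hand rather than through autotest_common.sh, a crude stand-in (paths are the defaults this job already uses; the real helper also watches the PID):

# Poll the SPDK RPC socket until the target answers.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done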
00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:28.799 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n4K 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wLQ ]] 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wLQ 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3iX 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3Ek ]] 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ek 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Wg0 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.057 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hE1 ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hE1 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4B5 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OQh ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OQh 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.UYt 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
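The host/auth.sh@80-82 loop traced here hands every secret file to the target's keyring before any handshake: key<i> authenticates the host, and ckey<i>, when present, is the controller-side secret for bidirectional auth. rpc_cmd is the autotest wrapper around spdk/scripts/rpc.py talking to /var/tmp/spdk.sock, so slot 3 of this run reduces to two plain calls:

scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha384.4B5
scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.OQh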
00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:32:29.058 16:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:32:29.992 Waiting for block devices as requested
00:32:30.251 0000:82:00.0 (8086 0a54): vfio-pci -> nvme
00:32:30.251 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:30.509 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:30.509 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:30.509 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:30.767 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:30.767 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:30.767 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:30.767 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:30.767 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:32:31.024 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:32:31.024 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:32:31.024 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:32:31.024 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:32:31.281 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:32:31.282 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:32:31.282 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:32:31.847 No valid GPT data, bailing
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:31.847 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420
00:32:31.847
00:32:31.847 Discovery Log Number of Records 2, Generation counter 2
00:32:31.847 =====Discovery Log Entry 0======
00:32:31.847 trtype: tcp
00:32:31.847 adrfam: ipv4
00:32:31.847 subtype: current discovery subsystem
00:32:31.847 treq: not specified, sq flow control disable supported
00:32:31.847 portid: 1
00:32:31.847 trsvcid: 4420
00:32:31.847 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:32:31.847 traddr: 10.0.0.1
00:32:31.847 eflags: none
00:32:31.848 sectype: none
00:32:31.848 =====Discovery Log Entry 1======
00:32:31.848 trtype: tcp
00:32:31.848 adrfam: ipv4
00:32:31.848 subtype: nvme subsystem
00:32:31.848 treq: not specified, sq flow control disable supported
00:32:31.848 portid: 1
00:32:31.848 trsvcid: 4420
00:32:31.848 subnqn: nqn.2024-02.io.spdk:cnode0
00:32:31.848 traddr: 10.0.0.1
00:32:31.848 eflags: none
00:32:31.848 sectype: none
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==:
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==:
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==:
00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==:
]] 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.848 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.106 nvme0n1 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.106 
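Collapsing the configure_kernel_target and nvmet_auth_init steps traced above into one configfs session makes the target-side setup easier to see. xtrace strips redirections, so the attribute file names below are inferred from the standard nvmet configfs layout rather than read from the log; the directories, values, and symlinks are exactly the ones traced:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # nvmf/common.sh@665 (inferred target)
echo 1 > "$subsys/attr_allow_any_host"                        # nvmf/common.sh@667
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # nvmf/common.sh@668
echo 1 > "$subsys/namespaces/1/enable"                        # nvmf/common.sh@669
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                  # nvmf/common.sh@671
echo tcp > "$nvmet/ports/1/addr_trtype"                       # nvmf/common.sh@672
echo 4420 > "$nvmet/ports/1/addr_trsvcid"                     # nvmf/common.sh@673
echo ipv4 > "$nvmet/ports/1/addr_adrfam"                      # nvmf/common.sh@674
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # nvmf/common.sh@677
# nvmet_auth_init then locks the subsystem down to the one test host:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"                # host/auth.sh@36
echo 0 > "$subsys/attr_allow_any_host"                        # host/auth.sh@37
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"  # host/auth.sh@38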
16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.106 
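From host/auth.sh@100 onward the rest of this log is one sweep: the same handful of operations instantiated for every digest, DH group, and key slot. The three nested loops visible above reduce to this skeleton (function bodies elided; names as in the trace):

for digest in "${digests[@]}"; do          # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
    for keyid in "${!keys[@]}"; do         # host/auth.sh@102
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify nvme0, detach
    done
  done
done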
16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.106 16:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.365 nvme0n1 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.365 16:31:15 
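connect_authenticate is the host half of each iteration: narrow bdev_nvme to the single digest/DH group under test, attach with the slot's key pair, confirm a controller named nvme0 appeared, and detach. In rpc.py terms, the keyid=0 iteration just traced is:

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0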
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.365 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.624 nvme0n1 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
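The local ip / ip_candidates preamble repeated before every attach is get_main_ns_ip resolving which address the host should dial. A reconstruction from the traced expansions; the name of the variable that expands to "tcp" is not visible in the trace, so TEST_TRANSPORT below is an assumption:

get_main_ns_ip() { # nvmf/common.sh@741-755
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    # $TEST_TRANSPORT assumed; the trace only shows it already expanded to "tcp"
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1             # indirect deref -> 10.0.0.1 on this rig
    echo "${!ip}"
}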
00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.624 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.625 nvme0n1 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.625 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.883 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:32.884 16:31:15 
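nvmet_auth_set_key (host/auth.sh@42-51) is the matching target half: it tells the kernel which hash, DH group, and secrets to demand from the allowed host. The echo targets are hidden by xtrace; assuming the standard nvmet per-host attributes, the keyid=3 invocation above amounts to:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"                  # host/auth.sh@48
echo ffdhe2048 > "$host/dhchap_dhgroup"                    # host/auth.sh@49
echo "$key" > "$host/dhchap_key"                           # host/auth.sh@50
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # host/auth.sh@51

where $key and $ckey hold the DHHC-1:02:... and DHHC-1:00:... strings shown in the trace.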
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.884 nvme0n1 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.884 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 16:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.143 nvme0n1 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
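Slot 4 was deliberately generated without a controller key (ckeys[4]=''), so the [[ -z '' ]] guard drops --dhchap-ctrlr-key and the attach at the top of this stretch exercises unidirectional authentication: the host proves its identity, the controller is not challenged in return. The traced command reduces to:

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4   # no ctrlr key: one-way auth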
"ckey${keyid}"}) 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.143 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.402 nvme0n1 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.402 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.661 nvme0n1 00:32:33.661 
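The sweep has now finished ffdhe2048 under sha256 and moved on to ffdhe3072. Assuming it covers the three digests and five FFDHE groups configured earlier against all five key slots, the full pass comes to:

echo $((3 * 5 * 5))   # 75 nvmet_auth_set_key + attach/verify/detach cycles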
16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.661 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.920 nvme0n1 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.920 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.178 16:31:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.437 nvme0n1 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.437 
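Each block of trace above is one iteration of the host/auth.sh digest/dhgroup/keyid matrix: nvmet_auth_set_key programs the key under test into the target, connect_authenticate drives an authenticated connect from the host, and the nvme0 name check plus bdev_nvme_detach_controller close the iteration. A minimal sketch of connect_authenticate as it can be reconstructed from the xtrace; the command names, RPC flags, NQNs, and the ${ckeys[keyid]:+...} guard are all taken from the trace, while the argument plumbing is assumed:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Pin the host to the single digest/dhgroup pair under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Attach with the host key; pass the controller key only when one is defined
      # (keyid 4 has an empty ckey, so the flag is omitted there).
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # Authentication succeeded only if a controller named nvme0 actually came up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }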
16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.437 16:31:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.437 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.702 nvme0n1 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:34.702 16:31:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.702 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.959 nvme0n1 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.959 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.960 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.960 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.960 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.960 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.960 16:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.960 16:31:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.960 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.960 16:31:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.216 nvme0n1 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.216 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.217 16:31:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.217 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.474 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.731 nvme0n1 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.731 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.732 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.990 nvme0n1 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.990 16:31:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.990 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.247 16:31:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.555 nvme0n1 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:36.555 16:31:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.555 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.120 nvme0n1 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.120 
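The get_main_ns_ip expansion that repeats before every attach resolves the connect address from the transport in use: an associative array maps each transport to the name of an environment variable, and that name is then dereferenced to an IP. A sketch of the logic as it reads from the trace; the transport variable's real name is not visible in the log (TEST_TRANSPORT is assumed here), as is the use of indirect expansion to turn NVMF_INITIATOR_IP into 10.0.0.1:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # RDMA jobs connect to the first target IP
          [tcp]=NVMF_INITIATOR_IP       # TCP jobs, like this run, use the initiator IP
      )
      # Bail out if the transport is unset or has no candidate variable.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}                # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }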
16:31:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.120 16:31:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.120 16:31:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.120 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.685 nvme0n1 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.685 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.942 16:31:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.505 nvme0n1 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.505 
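The DHHC-1 strings echoed into the target follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64 payload>:, where the payload is the secret followed by a 4-byte CRC-32 trailer and <t> names the transformation applied to the secret (00 for none; 01, 02, 03 for SHA-256, SHA-384, SHA-512, matching the 32-, 48-, and 64-byte key lengths seen above). That mapping is an inference from the key lengths in this log rather than something the log states; a hypothetical sanity check of the embedded secret length of one of the logged keys:

  # 'key' is copied verbatim from the trace above.
  key='DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B:'
  b64=${key#DHHC-1:*:}; b64=${b64%:}
  # Decoded payload = secret plus a 4-byte CRC-32 trailer, so subtract 4.
  echo $(( $(echo "$b64" | base64 -d | wc -c) - 4 ))   # prints 32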
16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.505 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.506 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.071 nvme0n1 00:32:39.071 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.071 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.071 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.071 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.071 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.071 16:31:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.071 16:31:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.071 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.071 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.072 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.637 nvme0n1 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.637 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.638 16:31:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.009 nvme0n1 00:32:41.009 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.009 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.009 16:31:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.009 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.009 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.009 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.009 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.009 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.010 16:31:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.943 nvme0n1 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.943 16:31:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.944 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.944 16:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.875 nvme0n1 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.875 
16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.875 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
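
For reference, the get_main_ns_ip helper being expanded in the xtrace above just maps the active transport to an environment-variable name and dereferences it. A minimal sketch reconstructed from the traced lines, assuming NVMF_INITIATOR_IP, NVMF_FIRST_TARGET_IP and TEST_TRANSPORT are exported by the surrounding harness:

# Sketch of nvmf/common.sh's get_main_ns_ip as seen in the trace;
# the env-var names are taken from the traced values.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Resolve the variable *name* for this transport, then dereference it
    # indirectly; the trace shows tcp -> NVMF_INITIATOR_IP -> 10.0.0.1.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] && echo "${!ip}"
}
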
00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.133 16:31:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.087 nvme0n1 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.087 
16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.087 16:31:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.051 nvme0n1 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:45.051 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.052 16:31:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.309 nvme0n1 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.309 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
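
The nvmet_auth_set_key step that opens each round (the host/auth.sh@48-51 echoes traced above) loads the round's digest, DH group and DHHC-1 secrets into the kernel target before the initiator connects. A rough sketch of where those echoes land, assuming the standard Linux nvmet configfs layout (the $nvmet_host path itself is not shown in the trace):

# keys[] / ckeys[] hold the DHHC-1 secrets visible in the trace;
# the configfs path is an assumption based on the usual nvmet layout.
nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}

    echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"      # e.g. ffdhe2048
    echo "$key" > "$nvmet_host/dhchap_key"              # host secret
    # The controller (bidirectional) secret is optional; keyid 4 has none.
    [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
}
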
00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.310 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.567 nvme0n1 00:32:45.567 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.567 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.567 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.568 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.825 nvme0n1 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.825 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.083 nvme0n1 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.083 16:31:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.083 nvme0n1 00:32:46.083 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.083 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.083 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.083 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.083 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.083 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
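
The key/ckey values above follow the NVMe TP 8006 secret representation used throughout this run: DHHC-1:<id>:<base64>:, where id 00 marks an unhashed secret, 01/02/03 request a SHA-256/384/512 transform, and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick check against one secret from the trace (the 36-byte total is 32 bytes of secret plus the CRC):

key='DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B:'
payload=${key#DHHC-1:*:}   # drop the 'DHHC-1:<id>:' prefix
payload=${payload%:}       # and the trailing colon
echo -n "$payload" | base64 -d | wc -c   # prints 36
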
00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.341 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.342 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.342 nvme0n1 00:32:46.342 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.342 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.342 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.342 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.342 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
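
Condensed, each connect_authenticate round traced here is four initiator-side RPCs: constrain the negotiable digest and DH group, attach with the round's keys, confirm the controller appeared, and detach. A replay of the sha384/ffdhe3072 round, with scripts/rpc.py standing in for the traced rpc_cmd wrapper (the path is an assumption; key1/ckey1 name keyring entries registered earlier in the test):

rpc=scripts/rpc.py   # inside an SPDK checkout; adjust as needed
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0
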
00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.600 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.859 nvme0n1 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.859 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.118 nvme0n1 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.118 16:31:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.376 nvme0n1 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.376 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.634 nvme0n1 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.634 16:31:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.634 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.635 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.635 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.893 nvme0n1 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.893 16:31:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.458 nvme0n1 00:32:48.458 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.458 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.459 16:31:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.459 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.716 nvme0n1 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:48.716 16:31:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.716 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.717 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.974 nvme0n1 00:32:48.974 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.232 16:31:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:49.232 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.491 nvme0n1 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.491 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.057 nvme0n1 00:32:50.057 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.057 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.057 16:31:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.057 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.057 16:31:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.057 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.057 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.057 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.057 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.057 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.316 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.883 nvme0n1 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.883 16:31:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.883 16:31:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.449 nvme0n1 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.449 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.450 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.016 nvme0n1 00:32:52.016 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.274 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.274 16:31:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.274 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.274 16:31:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.274 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.275 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.842 nvme0n1 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
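Every iteration in this trace follows one pattern: a target-side helper (nvmet_auth_set_key) installs the DH-HMAC-CHAP secret and parameters for the chosen keyid (the echoed 'hmac(shaNNN)', dhgroup, and DHHC-1 strings, apparently written into the kernel nvmet target's configuration), after which the host pins a single digest and DH group with bdev_nvme_set_options and authenticates the connection with bdev_nvme_attach_controller, passing --dhchap-ctrlr-key only when a controller key (ckeyN) exists for that keyid. A minimal standalone sketch of one host-side iteration using SPDK's rpc.py client, limited to flags that appear verbatim in the trace; it assumes the secrets are already registered under the names key0/ckey0, as the --dhchap-key arguments imply:

  # One cell of the auth matrix: digest sha384, DH group ffdhe8192, keyid 0.
  # Address, port, and NQNs are the ones shown in the trace above.
  rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc.py bdev_nvme_detach_controller nvme0              # tear down for the next cell

The detach between cells is what produces the repeated "nvme0n1 ... bdev_nvme_detach_controller nvme0" sequences throughout this excerpt: each digest/dhgroup/keyid combination is exercised on a fresh controller.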
00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.842 16:31:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.226 nvme0n1 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.226 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.227 16:31:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.170 nvme0n1 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.170 16:31:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 nvme0n1 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.101 16:31:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.041 nvme0n1 00:32:57.041 16:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.041 16:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:57.041 16:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.041 16:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.041 16:31:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.041 16:31:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.041 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.041 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.041 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.041 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.298 16:31:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.298 16:31:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 nvme0n1 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.229 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.230 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.230 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.230 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.488 nvme0n1 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.488 16:31:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.488 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.747 nvme0n1 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.747 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.006 nvme0n1 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.006 16:31:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.006 16:31:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.006 16:31:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.265 nvme0n1 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.265 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.523 nvme0n1 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.523 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.524 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.782 nvme0n1 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.782 
16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:59.782 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.783 16:31:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.783 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.041 nvme0n1 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.041 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
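The nvmet_auth_set_key calls traced above configure the target (kernel nvmet) side of each DH-HMAC-CHAP iteration; the echoes at host/auth.sh@48-51 emit the digest, DH group, host key, and (when present) controller key for the key slot under test. The helper's body is not visible in this log, so the following is only a plausible sketch of what it writes, assuming the kernel nvmet configfs auth attributes; the host NQN and the DHHC-1 strings in $key/$ckey are the ones echoed in the trace:

    # Hedged reconstruction: attribute names follow the kernel nvmet configfs
    # auth interface; the actual helper body is not shown in this trace.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"    # digest (auth.sh@48)
    echo ffdhe3072 > "$host_dir/dhchap_dhgroup"      # DH group (auth.sh@49)
    echo "$key" > "$host_dir/dhchap_key"             # DHHC-1:... host key (auth.sh@50)
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # bidirectional only (auth.sh@51)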
00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.042 16:31:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.300 nvme0n1 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.300 16:31:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
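On the initiator side, connect_authenticate (host/auth.sh@104) restricts the SPDK bdev_nvme module to the digest/dhgroup pair under test and then attaches with the matching key slot. rpc_cmd in this trace is the test suite's wrapper around SPDK's scripts/rpc.py; run standalone, the keyid-3 iteration above would look roughly like the sketch below (key3/ckey3 are key names presumably registered earlier in the test setup, which is not part of this excerpt):

    # Limit negotiation to sha512 + ffdhe3072, then attach with key slot 3.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3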
00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.300 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.558 nvme0n1 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.558 
16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.558 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.817 nvme0n1 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:00.817 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.818 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.076 nvme0n1 00:33:01.077 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.077 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.077 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.077 16:31:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.077 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.077 16:31:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.077 16:31:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.077 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.642 nvme0n1 00:33:01.642 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.642 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.642 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.642 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
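Each successful attach is verified and torn down before the next slot is tried: host/auth.sh@64 lists controllers and expects exactly nvme0, and host/auth.sh@65 detaches it so the next digest/dhgroup/keyid combination starts from a clean state. A standalone equivalent of that check:

    # Confirm the authenticated controller came up, then remove it.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    scripts/rpc.py bdev_nvme_detach_controller nvme0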
00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.643 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.901 nvme0n1 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.901 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.902 16:31:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.468 nvme0n1 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.468 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.469 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.727 nvme0n1 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
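The pattern repeating through this stretch of the log is a nested sweep: an outer loop over DH groups (host/auth.sh@101) and an inner loop over the five key slots (host/auth.sh@102), each pass setting the target key and then running connect_authenticate. In outline, using only the combinations visible in this excerpt (the full test may cover additional groups):

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in 0 1 2 3 4; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side
            connect_authenticate sha512 "$dhgroup" "$keyid"  # initiator side
        done
    done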
00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.727 16:31:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.293 nvme0n1 00:33:03.293 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.293 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.293 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.293 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.293 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.293 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
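The get_main_ns_ip block that precedes every attach (nvmf/common.sh@741-755) resolves which address the initiator should dial for the active transport; for tcp it dereferences NVMF_INITIATOR_IP, which is 10.0.0.1 on this rig. A reconstruction from the xtrace (names other than the two candidate variables are assumptions):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # bail out if the transport or its candidate variable name is unset
        # (the [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]] tests in the trace)
        if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}  # ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1           # [[ -z 10.0.0.1 ]]
        echo "${!ip}"                         # echo 10.0.0.1
    }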
00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.551 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.552 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.117 nvme0n1 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.117 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.118 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.118 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.118 16:31:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.118 16:31:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.118 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.118 16:31:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.683 nvme0n1 00:33:04.683 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.683 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.683 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.683 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.683 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.683 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.684 16:31:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.270 nvme0n1 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.270 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.271 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.271 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.271 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.873 nvme0n1 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.873 16:31:48 
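[editor's note] The stretch above repeats one fixed pattern per key index: nvmet_auth_set_key pushes the digest, DH group, and secret(s) into the kernel target, then connect_authenticate drives the SPDK host side. A minimal sketch of that host side, condensed from the trace (the key1/ckey1 names are registered earlier in auth.sh, outside this excerpt); note that keyid 4 has no controller key, so its attach drops --dhchap-ctrlr-key and authentication is one-way:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # attach succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0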
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMwNzgyZDMxZjkwZDBlMDNmMjM0MjdhNzMzMmVlYzNBNv8B: 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTI5M2UzY2U0YjBjMTdkNWZlYjhkNzBiOTJmNWQ2NDVjMjhmOTk0Y2M5YmUzZDMzNGFhYzFmNTgwNjRiMWE1N0RE72U=: 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.873 16:31:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.131 16:31:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.131 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.131 16:31:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.065 nvme0n1 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.065 16:31:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.437 nvme0n1 00:33:08.437 16:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.437 16:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.437 16:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.437 16:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.437 16:31:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.437 16:31:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.437 16:31:51 
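[editor's note] The DHHC-1 strings exchanged above follow the NVMe-oF DH-HMAC-CHAP secret representation; the breakdown below is informational (field meanings recalled from the spec, not something the script itself parses):

    # DHHC-1:<id>:<base64(secret || CRC-32)>:
    #   In this run the <id>=00/01 secrets decode to 32 bytes, 02 to 48, 03 to 64,
    #   matching the SHA-256/384/512 secret sizes; a CRC-32 of the raw secret is
    #   appended before base64 encoding, and the string ends with a colon.
    key='DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L:'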
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzMxNTUxZGUyZmFiMGE2ZmIwNTM0MTNkOWY5Y2MyMTObeh/L: 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: ]] 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGMxZTc5ZjlkYjc0NDI1NjYxNWZlOGE4OTdiZjRkOTEgK99O: 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.437 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.438 16:31:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.372 nvme0n1 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZiNjAzNjcxNzZiM2M5YWEwMjMyYWFkZDU2NDQ5ZmMzZTNmOTIzYTFhMzAyMmY2/A+PpQ==: 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTc4ZDY1OWIxMjU2Mjg2ZTk2ZjJjMjUyNzA1ZTBkOTh3sekT: 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:09.372 16:31:52 
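[editor's note] On the target side, the echo 'hmac(sha512)' / echo ffdhe8192 / echo DHHC-1:... triple inside nvmet_auth_set_key is presumably redirected into the kernel nvmet configfs attributes of the host entry; a sketch under that assumption (attribute names as in the mainline nvmet auth driver):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"
    echo ffdhe8192 > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # only for bidirectional auth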
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.372 16:31:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 nvme0n1 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjVkZjhlOGI2ZGE3NzY0ZTcyMDgyYjJjYjVlMWIxMGYwODk5M2U2ZjQ2ZTI3N2NlNzNiYzZmZThiZGJlZWRhZkcRGFk=: 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:10.308 16:31:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.240 nvme0n1 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.241 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY2MWVkNjZkOTViMTIzYzk1YjVhNjY0MDk4NmU0OGI5MWI3NDIwNGQ4MGEwYzA5W34q6w==: 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjRhMWM0ZTU4YTU1ZjRlZjEyNTg0MDc1NmJkM2M4YTgyYzFjM2M3ZmZlNDE2NGVlxcDuoA==: 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.500 
16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.500 request: 00:33:11.500 { 00:33:11.500 "name": "nvme0", 00:33:11.500 "trtype": "tcp", 00:33:11.500 "traddr": "10.0.0.1", 00:33:11.500 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:11.500 "adrfam": "ipv4", 00:33:11.500 "trsvcid": "4420", 00:33:11.500 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:11.500 "method": "bdev_nvme_attach_controller", 00:33:11.500 "req_id": 1 00:33:11.500 } 00:33:11.500 Got JSON-RPC error response 00:33:11.500 response: 00:33:11.500 { 00:33:11.500 "code": -5, 00:33:11.500 "message": "Input/output error" 00:33:11.500 } 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:11.500 
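[editor's note] From host/auth.sh@110 onward the script re-keys the stack for sha256/ffdhe2048 and switches to negative testing: each attach that must fail is wrapped in NOT (from autotest_common.sh), which inverts the wrapped command's exit status, and the rejected authentication surfaces as JSON-RPC error -5 (Input/output error) as printed above. The shape of the helper, sketched (the real one also validates its argument, per the type -t calls in the trace):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    # e.g. attaching without any DH-HMAC-CHAP key against an auth-required subsystem:
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0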
16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.500 request: 00:33:11.500 { 00:33:11.500 "name": "nvme0", 00:33:11.500 "trtype": "tcp", 00:33:11.500 "traddr": "10.0.0.1", 00:33:11.500 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:11.500 "adrfam": "ipv4", 00:33:11.500 "trsvcid": "4420", 00:33:11.500 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:11.500 "dhchap_key": "key2", 00:33:11.500 "method": "bdev_nvme_attach_controller", 00:33:11.500 "req_id": 1 00:33:11.500 } 00:33:11.500 Got JSON-RPC error response 00:33:11.500 response: 00:33:11.500 { 00:33:11.500 "code": -5, 00:33:11.500 "message": "Input/output error" 00:33:11.500 } 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.500 
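[editor's note] Between failure cases the script asserts that no controller leaked through: the jq length / (( 0 == 0 )) pairs at host/auth.sh@114 and @120 reduce to the check below. The third case that follows (key1 with mismatched ckey2) fails the same way, showing the controller key is verified independently of the host key.

    # after each rejected attach, the controller list must be empty
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))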
16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.500 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.759 request: 00:33:11.759 { 00:33:11.759 "name": "nvme0", 00:33:11.759 "trtype": "tcp", 00:33:11.759 "traddr": "10.0.0.1", 00:33:11.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:11.759 "adrfam": "ipv4", 00:33:11.759 "trsvcid": "4420", 00:33:11.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:11.759 "dhchap_key": "key1", 00:33:11.759 "dhchap_ctrlr_key": "ckey2", 00:33:11.759 "method": "bdev_nvme_attach_controller", 00:33:11.759 "req_id": 1 
00:33:11.759 } 00:33:11.759 Got JSON-RPC error response 00:33:11.759 response: 00:33:11.759 { 00:33:11.759 "code": -5, 00:33:11.759 "message": "Input/output error" 00:33:11.759 } 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:11.759 rmmod nvme_tcp 00:33:11.759 rmmod nvme_fabrics 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 459904 ']' 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 459904 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 459904 ']' 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 459904 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 459904 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 459904' 00:33:11.759 killing process with pid 459904 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 459904 00:33:11.759 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 459904 00:33:12.017 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:12.017 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:12.017 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:12.017 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:12.017 16:31:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:12.017 16:31:54 
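[editor's note] The cleanup that follows unwinds everything the suite set up: host-side modules out, target app killed, then the kernel nvmet configfs tree removed child-before-parent. Condensed from the trace below:

    modprobe -v -r nvme-tcp            # rmmod nvme_tcp, rmmod nvme_fabrics
    kill "$nvmfpid" && wait "$nvmfpid" # pid 459904 in this run
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0  # presumably disables the namespace before removal
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet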
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.017 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:12.017 16:31:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.919 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:13.919 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:13.919 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:13.919 16:31:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:13.919 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:13.919 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:14.177 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:14.177 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:14.177 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:14.177 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:14.177 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:14.177 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:14.177 16:31:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:15.551 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:15.551 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:15.551 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:15.551 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:15.551 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:15.551 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:15.551 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:15.551 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:15.551 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:16.484 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:33:16.484 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.n4K /tmp/spdk.key-null.3iX /tmp/spdk.key-sha256.Wg0 /tmp/spdk.key-sha384.4B5 /tmp/spdk.key-sha512.UYt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:16.484 16:31:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:17.858 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:17.858 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:17.858 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:17.858 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:17.858 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:17.858 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:17.858 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:17.858 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:17.858 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:17.858 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:17.858 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:17.858 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:17.858 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:17.858 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:17.858 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:17.858 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:17.858 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:17.858 00:33:17.858 real 0m51.997s 00:33:17.858 user 0m49.484s 00:33:17.858 sys 0m5.910s 00:33:17.858 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:17.858 16:32:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.858 ************************************ 00:33:17.858 END TEST nvmf_auth_host 00:33:17.858 ************************************ 00:33:17.858 16:32:00 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:17.858 16:32:00 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:17.858 16:32:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:17.858 16:32:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:17.858 16:32:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:17.858 ************************************ 00:33:17.858 START TEST nvmf_digest 00:33:17.858 ************************************ 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:17.858 * Looking for test storage... 
00:33:17.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:17.858 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:17.858 16:32:00 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.859 16:32:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:17.859 16:32:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.859 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:17.859 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:17.859 16:32:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:17.859 16:32:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:19.753 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.753 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:19.754 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:19.754 Found net devices under 0000:84:00.0: cvl_0_0 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:19.754 Found net devices under 0000:84:00.1: cvl_0_1 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:19.754 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:20.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:33:20.012 00:33:20.012 --- 10.0.0.2 ping statistics --- 00:33:20.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.012 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:20.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:33:20.012 00:33:20.012 --- 10.0.0.1 ping statistics --- 00:33:20.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.012 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:20.012 ************************************ 00:33:20.012 START TEST nvmf_digest_clean 00:33:20.012 ************************************ 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=469692 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 469692 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 469692 ']' 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.012 
16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:20.012 16:32:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.012 [2024-07-15 16:32:02.949860] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:20.012 [2024-07-15 16:32:02.949947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.012 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.269 [2024-07-15 16:32:03.014864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.269 [2024-07-15 16:32:03.098285] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.269 [2024-07-15 16:32:03.098351] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.269 [2024-07-15 16:32:03.098377] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.269 [2024-07-15 16:32:03.098388] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.269 [2024-07-15 16:32:03.098398] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:20.269 [2024-07-15 16:32:03.098424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.269 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.526 null0 00:33:20.526 [2024-07-15 16:32:03.281116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.526 [2024-07-15 16:32:03.305334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=469717 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 469717 /var/tmp/bperf.sock 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 469717 ']' 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:20.526 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.526 [2024-07-15 16:32:03.353979] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:20.526 [2024-07-15 16:32:03.354059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469717 ] 00:33:20.526 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.526 [2024-07-15 16:32:03.414105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.526 [2024-07-15 16:32:03.500646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.782 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:20.783 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:20.783 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:20.783 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:20.783 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:21.040 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.040 16:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.604 nvme0n1 00:33:21.604 16:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:21.604 16:32:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:21.604 Running I/O for 2 seconds... 
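Condensed, each run_bperf cycle traced in this section follows the same four-step shape; as a sketch only (with $SPDK standing in for the full checkout path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and eliding the harness's wait-for-socket step):

# start bdevperf suspended, exposing an RPC socket (-z keeps it alive, --wait-for-rpc defers init)
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# complete framework initialization once the RPC socket is up
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# attach the TCP target with data digest enabled; --ddgst is what exercises the crc32c path
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# drive the timed workload against the attached controller
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests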
00:33:24.129 00:33:24.129 Latency(us) 00:33:24.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.129 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:24.129 nvme0n1 : 2.00 18784.54 73.38 0.00 0.00 6803.56 3762.25 15534.46 00:33:24.129 =================================================================================================================== 00:33:24.129 Total : 18784.54 73.38 0.00 0.00 6803.56 3762.25 15534.46 00:33:24.129 0 00:33:24.129 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:24.129 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:24.129 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:24.129 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:24.129 | select(.opcode=="crc32c") 00:33:24.129 | "\(.module_name) \(.executed)"' 00:33:24.129 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:24.129 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 469717 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 469717 ']' 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 469717 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 469717 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 469717' 00:33:24.130 killing process with pid 469717 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 469717 00:33:24.130 Received shutdown signal, test time was about 2.000000 seconds 00:33:24.130 00:33:24.130 Latency(us) 00:33:24.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.130 =================================================================================================================== 00:33:24.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.130 16:32:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 469717 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:24.130 16:32:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=470123 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 470123 /var/tmp/bperf.sock 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 470123 ']' 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:24.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:24.130 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:24.130 [2024-07-15 16:32:07.047527] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:24.130 [2024-07-15 16:32:07.047619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470123 ] 00:33:24.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:24.130 Zero copy mechanism will not be used. 
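The zero-copy notice above is expected: the 131072-byte runs exceed the sock layer's 65536-byte zero-copy threshold, so sends fall back to copying, while the 4096-byte runs stay below the cutoff. In principle the threshold could be raised through the sock options RPC before framework_start_init; the flag name in the sketch below is an assumption about this SPDK vintage rather than something taken from the trace, so verify against rpc.py sock_impl_set_options -h before relying on it:

# hypothetical: lift the zero-copy cutoff above 128 KiB (flag name assumed)
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock sock_impl_set_options -i posix --zerocopy-threshold 262144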
00:33:24.130 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.388 [2024-07-15 16:32:07.109607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.388 [2024-07-15 16:32:07.201832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.388 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:24.388 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:24.388 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:24.388 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:24.388 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:24.646 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.646 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.212 nvme0n1 00:33:25.212 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:25.212 16:32:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:25.212 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:25.212 Zero copy mechanism will not be used. 00:33:25.212 Running I/O for 2 seconds... 
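After each 2-second run, the harness checks that crc32c work really went through the expected accel module, using the accel_get_stats RPC and the jq filter visible in the trace. Condensed into a sketch (same $SPDK shorthand as above; with DSA scanning disabled, scan_dsa=false, the expected module is software):

# read "module executed-count" for crc32c ops out of bdevperf's accel stats
read -r acc_module acc_executed < <(
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
# the run passes only if the software module actually executed digests
(( acc_executed > 0 )) && [[ $acc_module == software ]]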
00:33:27.112 00:33:27.112 Latency(us) 00:33:27.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.112 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:27.112 nvme0n1 : 2.00 3707.35 463.42 0.00 0.00 4311.29 755.48 6893.42 00:33:27.112 =================================================================================================================== 00:33:27.112 Total : 3707.35 463.42 0.00 0.00 4311.29 755.48 6893.42 00:33:27.112 0 00:33:27.112 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:27.112 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:27.112 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:27.112 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:27.112 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:27.112 | select(.opcode=="crc32c") 00:33:27.112 | "\(.module_name) \(.executed)"' 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 470123 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 470123 ']' 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 470123 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:27.370 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 470123 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 470123' 00:33:27.628 killing process with pid 470123 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 470123 00:33:27.628 Received shutdown signal, test time was about 2.000000 seconds 00:33:27.628 00:33:27.628 Latency(us) 00:33:27.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.628 =================================================================================================================== 00:33:27.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 470123 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:27.628 16:32:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=470533 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 470533 /var/tmp/bperf.sock 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 470533 ']' 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:27.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:27.628 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:27.887 [2024-07-15 16:32:10.622816] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:27.887 [2024-07-15 16:32:10.622896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470533 ] 00:33:27.887 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.887 [2024-07-15 16:32:10.686832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.887 [2024-07-15 16:32:10.775691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.887 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.887 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:27.887 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:27.887 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:27.887 16:32:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:28.450 16:32:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:28.450 16:32:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:28.709 nvme0n1 00:33:28.709 16:32:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:28.709 16:32:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:28.709 Running I/O for 2 seconds... 
00:33:30.664 00:33:30.664 Latency(us) 00:33:30.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.664 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:30.664 nvme0n1 : 2.01 20540.81 80.24 0.00 0.00 6221.03 2985.53 10048.85 00:33:30.664 =================================================================================================================== 00:33:30.664 Total : 20540.81 80.24 0.00 0.00 6221.03 2985.53 10048.85 00:33:30.664 0 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:30.920 | select(.opcode=="crc32c") 00:33:30.920 | "\(.module_name) \(.executed)"' 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 470533 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 470533 ']' 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 470533 00:33:30.920 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:31.179 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:31.179 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 470533 00:33:31.179 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:31.179 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:31.179 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 470533' 00:33:31.179 killing process with pid 470533 00:33:31.179 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 470533 00:33:31.179 Received shutdown signal, test time was about 2.000000 seconds 00:33:31.179 00:33:31.179 Latency(us) 00:33:31.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.179 =================================================================================================================== 00:33:31.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.179 16:32:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 470533 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:31.179 16:32:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=471062 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 471062 /var/tmp/bperf.sock 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 471062 ']' 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:31.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:31.179 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:31.438 [2024-07-15 16:32:14.184078] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:31.438 [2024-07-15 16:32:14.184167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471062 ] 00:33:31.438 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:31.438 Zero copy mechanism will not be used. 
00:33:31.438 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.438 [2024-07-15 16:32:14.243036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.438 [2024-07-15 16:32:14.328876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.438 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:31.438 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:31.438 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:31.438 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:31.438 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:32.004 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:32.004 16:32:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:32.262 nvme0n1 00:33:32.262 16:32:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:32.262 16:32:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:32.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:32.262 Zero copy mechanism will not be used. 00:33:32.262 Running I/O for 2 seconds... 
00:33:34.792 00:33:34.792 Latency(us) 00:33:34.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.792 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:34.792 nvme0n1 : 2.00 4374.27 546.78 0.00 0.00 3649.50 2548.62 8204.14 00:33:34.792 =================================================================================================================== 00:33:34.792 Total : 4374.27 546.78 0.00 0.00 3649.50 2548.62 8204.14 00:33:34.792 0 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:34.792 | select(.opcode=="crc32c") 00:33:34.792 | "\(.module_name) \(.executed)"' 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 471062 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 471062 ']' 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 471062 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 471062 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 471062' 00:33:34.792 killing process with pid 471062 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 471062 00:33:34.792 Received shutdown signal, test time was about 2.000000 seconds 00:33:34.792 00:33:34.792 Latency(us) 00:33:34.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.792 =================================================================================================================== 00:33:34.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 471062 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 469692 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@946 -- # '[' -z 469692 ']' 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 469692 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 469692 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 469692' 00:33:34.792 killing process with pid 469692 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 469692 00:33:34.792 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 469692 00:33:35.052 00:33:35.052 real 0m15.070s 00:33:35.052 user 0m29.403s 00:33:35.052 sys 0m4.698s 00:33:35.052 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:35.052 16:32:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:35.052 ************************************ 00:33:35.052 END TEST nvmf_digest_clean 00:33:35.052 ************************************ 00:33:35.052 16:32:17 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:35.052 16:32:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:35.052 16:32:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:35.052 16:32:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:35.052 ************************************ 00:33:35.052 START TEST nvmf_digest_error 00:33:35.052 ************************************ 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=471487 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 471487 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 471487 ']' 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:35.052 
16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:35.052 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.311 [2024-07-15 16:32:18.073088] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:35.311 [2024-07-15 16:32:18.073169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.311 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.311 [2024-07-15 16:32:18.141965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.311 [2024-07-15 16:32:18.230783] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.311 [2024-07-15 16:32:18.230847] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.311 [2024-07-15 16:32:18.230864] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:35.311 [2024-07-15 16:32:18.230878] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:35.311 [2024-07-15 16:32:18.230889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:35.311 [2024-07-15 16:32:18.230919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.311 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:35.311 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:35.311 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:35.311 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:35.311 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.569 [2024-07-15 16:32:18.311521] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.569 null0 00:33:35.569 [2024-07-15 16:32:18.426315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.569 [2024-07-15 16:32:18.450528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=471519 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 471519 /var/tmp/bperf.sock 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 471519 ']' 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:35.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:35.569 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:35.570 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.570 [2024-07-15 16:32:18.497414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:35.570 [2024-07-15 16:32:18.497487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471519 ] 00:33:35.570 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.827 [2024-07-15 16:32:18.561267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.827 [2024-07-15 16:32:18.662542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.827 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:35.827 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:35.827 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:35.827 16:32:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.085 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:36.085 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.085 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.085 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.085 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.085 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.342 nvme0n1 00:33:36.600 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:36.600 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.600 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.600 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.600 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:36.600 16:32:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:36.600 Running I/O for 2 seconds... 
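Condensed, the setup the trace above just performed is sketched below. This is an illustrative block, not the verbatim commands: the long Jenkins paths are shortened to rpc.py / bdevperf / bdevperf.py, and the plumbing hidden inside common_target_config is an assumption based on the usual SPDK nvmf flow, since the log only shows its bare rpc_cmd call, the null0 bdev name, and the transport/listener notices. Every other flag below is copied from the trace.

  # Target side: nvmf_tgt starts paused (--wait-for-rpc) so crc32c can be
  # routed to the error-injection accel module before anything runs.
  nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  rpc.py accel_assign_opc -o crc32c -m error   # shown above: crc32c assigned to module 'error'
  rpc.py framework_start_init                  # assumed: resumes init after --wait-for-rpc
  # Assumed common_target_config equivalent (sizes are placeholders; 'null0' is shown above):
  rpc.py bdev_null_create null0 1000 512
  rpc.py nvmf_create_transport -t tcp          # matches the '*** TCP Transport Init ***' notice
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a       # assumed
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0    # assumed
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf on its own RPC socket, data digest enabled on the controller.
  bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable   # target socket: keep digests clean while connecting
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target now corrupts its crc32c results
  bdevperf.py -s /var/tmp/bperf.sock perform_tests

With --ddgst on and the target's crc32c corrupted, each C2H data PDU carries a bad data digest; the initiator recomputes the digest on receive, detects the mismatch, and fails the read. That is what produces the repeating three-line records below: nvme_tcp flags the digest error, then nvme_qpair prints the failed READ and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.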
00:33:36.600 [2024-07-15 16:32:19.447825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.447884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.447903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.462614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.462649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.462668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.477314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.477349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.477368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.489021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.489049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.489064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.506787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.506817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.506833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.520993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.521021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.521052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.532454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.532489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.532508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.548537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.548571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.548596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.559800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.559828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.559843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.600 [2024-07-15 16:32:19.574422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.600 [2024-07-15 16:32:19.574456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.600 [2024-07-15 16:32:19.574476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.858 [2024-07-15 16:32:19.589012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.858 [2024-07-15 16:32:19.589040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.858 [2024-07-15 16:32:19.589072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.858 [2024-07-15 16:32:19.600905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.858 [2024-07-15 16:32:19.600933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.858 [2024-07-15 16:32:19.600963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.858 [2024-07-15 16:32:19.616561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.858 [2024-07-15 16:32:19.616595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.858 [2024-07-15 16:32:19.616613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.858 [2024-07-15 16:32:19.633178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.858 [2024-07-15 16:32:19.633213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.858 [2024-07-15 16:32:19.633232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.858 [2024-07-15 16:32:19.646699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.858 [2024-07-15 16:32:19.646735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.858 [2024-07-15 16:32:19.646763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.858 [2024-07-15 16:32:19.658726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.858 [2024-07-15 16:32:19.658791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.858 [2024-07-15 16:32:19.658807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.858 [2024-07-15 16:32:19.675290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.858 [2024-07-15 16:32:19.675329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.858 [2024-07-15 16:32:19.675349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.688183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.688216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.688235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.704826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.704856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.704872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.719425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.719460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.719479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.731534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.731568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 
16:32:19.731587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.748283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.748316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.748335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.760998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.761043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.761059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.773906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.773933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.773964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.786329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.786363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.786382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.800084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.800118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.800137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.813801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.813829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.813860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:36.859 [2024-07-15 16:32:19.826206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:36.859 [2024-07-15 16:32:19.826239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15835 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:36.859 [2024-07-15 16:32:19.826258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.117 [2024-07-15 16:32:19.840544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.117 [2024-07-15 16:32:19.840579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.117 [2024-07-15 16:32:19.840598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.117 [2024-07-15 16:32:19.852135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.117 [2024-07-15 16:32:19.852171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.117 [2024-07-15 16:32:19.852189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.117 [2024-07-15 16:32:19.869136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.117 [2024-07-15 16:32:19.869169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.117 [2024-07-15 16:32:19.869188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.117 [2024-07-15 16:32:19.885841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.117 [2024-07-15 16:32:19.885870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.117 [2024-07-15 16:32:19.885885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.117 [2024-07-15 16:32:19.900408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.117 [2024-07-15 16:32:19.900441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.117 [2024-07-15 16:32:19.900460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.117 [2024-07-15 16:32:19.912525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.117 [2024-07-15 16:32:19.912564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.117 [2024-07-15 16:32:19.912584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.117 [2024-07-15 16:32:19.926444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.117 [2024-07-15 16:32:19.926477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:57 nsid:1 lba:4360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:19.926496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:19.942170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:19.942203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:19.942222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:19.955590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:19.955624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:19.955643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:19.967011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:19.967038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:19.967069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:19.984172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:19.984207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:19.984226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:19.995325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:19.995359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:19.995378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:20.011686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:20.011750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:20.011772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:20.023538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:20.023574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:20.023595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:20.039797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:20.039827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:20.039857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:20.054222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:20.054255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:20.054275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:20.066201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:20.066235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:20.066255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:20.080011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:20.080054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:20.080074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.118 [2024-07-15 16:32:20.094769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.118 [2024-07-15 16:32:20.094816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.118 [2024-07-15 16:32:20.094834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.107122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.107155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.107174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.122895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 
00:33:37.376 [2024-07-15 16:32:20.122923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.122955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.138955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.138983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.139014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.151128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.151162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.151186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.165234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.165267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.165286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.177519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.177552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.177570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.191072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.191119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.191138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.205163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.205198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.205217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.218234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.218266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.218285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.232363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.232397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.232416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.243943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.243970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.244002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.257829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.257856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.257888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.271973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.272005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.272040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.283923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.283949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.283980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.298104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.298137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.298157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.310372] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.310407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.310426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.323124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.323158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.323177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.336985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.337012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.337043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-15 16:32:20.350999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.376 [2024-07-15 16:32:20.351026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-15 16:32:20.351057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.364420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.364453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.364472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.375292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.375324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.375343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.391821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.391848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.391879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:37.635 [2024-07-15 16:32:20.406532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.406565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.406584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.418555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.418587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.418606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.432320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.432354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.432372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.446934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.446962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.446993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.458059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.458087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.458117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.472764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.472818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.472848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.484430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.484464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.484483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.499762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.499806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.499829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.512987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.513015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.513047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.525875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.525904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.525920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.536262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.536289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.536321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.549612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.549639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.549670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.559891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.559919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.559949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.635 [2024-07-15 16:32:20.572145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40) 00:33:37.635 [2024-07-15 16:32:20.572173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.635 [2024-07-15 16:32:20.572204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.635 [2024-07-15 16:32:20.583302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208ab40)
00:33:37.635 [2024-07-15 16:32:20.583330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.635 [2024-07-15 16:32:20.583360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern -- an injected crc32c data digest error on tqpair 0x208ab40, the affected READ (len:1, the 4096-byte pass), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats from 16:32:20.595 through 16:32:21.427 with only the timestamp, cid, and lba changing; the suite counts these completions below ...]
00:33:38.670
00:33:38.670 Latency(us)
00:33:38.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:38.670 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:38.670 nvme0n1 : 2.01 19317.67 75.46 0.00 0.00 6616.43 3034.07 20777.34
00:33:38.670 ===================================================================================================================
00:33:38.670 Total : 19317.67 75.46 0.00 0.00 6616.43 3034.07 20777.34
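Worth noting in the summary above: Fail/s is 0.00 even though the run was saturated with injected digest errors, because digest.sh configures the session with bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 (visible in the setup trace for the next pass further down), so each transient transport error is retried inside the bdev layer and only shows up in the NVMe error counters. The verification traced immediately below reads that counter over RPC; here is a minimal standalone sketch of the same check, assuming bdevperf is still listening on /var/tmp/bperf.sock and the SPDK repo root as working directory (variable names are illustrative):

    #!/usr/bin/env bash
    # Read the per-bdev NVMe error statistics and pull out the counter of
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions. Requires
    # bdev_nvme_set_options --nvme-error-stat, or the counters are absent.
    sock=/var/tmp/bperf.sock   # RPC socket bdevperf was started with (-r)
    bdev=nvme0n1               # bdev created by bdev_nvme_attach_controller

    errcount=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # Pass criterion mirrors the traced check: at least one injected
    # digest error must have been observed and counted.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"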
00:33:38.670 0
00:33:38.670 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:38.670 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:38.670 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:38.670 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:38.670 | .driver_specific
00:33:38.670 | .nvme_error
00:33:38.670 | .status_code
00:33:38.670 | .command_transient_transport_error'
00:33:38.927 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 ))
00:33:38.927 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 471519
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 471519 ']'
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 471519
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 471519
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 471519'
00:33:38.928 killing process with pid 471519
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 471519
00:33:38.928 Received shutdown signal, test time was about 2.000000 seconds
00:33:38.928
00:33:38.928 Latency(us)
00:33:38.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:38.928 ===================================================================================================================
00:33:38.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:38.928 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 471519
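With the 4096-byte pass torn down, the trace below starts the 131072-byte pass by relaunching bdevperf in a paused state. A rough sketch of that launch-and-wait step, assuming the SPDK repo root as working directory; the polling loop merely stands in for autotest's waitforlisten helper and is not the suite's actual code:

    # Start bdevperf on core 1 (-m 2) with a private RPC socket (-r);
    # -z holds the configured job until a perform_tests RPC arrives.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Wait until the RPC socket answers before configuring the session.
    until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done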
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=471922
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 471922 /var/tmp/bperf.sock
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 471922 ']'
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:39.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:39.186 16:32:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:39.186 [2024-07-15 16:32:22.005622] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:33:39.186 [2024-07-15 16:32:22.005709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471922 ]
00:33:39.186 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:39.186 Zero copy mechanism will not be used.
00:33:39.186 EAL: No free 2048 kB hugepages reported on node 1
00:33:39.186 [2024-07-15 16:32:22.073802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.186 [2024-07-15 16:32:22.166409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:39.445 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:39.445 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:39.445 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:39.445 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:39.702 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:39.702 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:39.702 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:39.702 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:39.702 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:39.702 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:39.960 nvme0n1
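The RPCs traced above are the whole per-pass setup: NVMe error counters on with unlimited retries, any stale crc32c injection cleared, and the controller attached with data digest (DDGST) verification enabled so every received payload is checked against its crc32c. Condensed into one sketch with the same flags and addresses as the trace; the split between the two sockets is inferred from bperf_rpc vs. rpc_cmd above, where rpc_cmd appears to address the target application's default socket rather than bdevperf:

    bperf_rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf's socket
    tgt_rpc="./scripts/rpc.py"                            # default RPC socket

    # Keep NVMe error statistics and retry failed I/O indefinitely, so
    # injected digest errors are counted without ever failing the workload.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Make sure no crc32c error injection is left armed from a prior pass.
    $tgt_rpc accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF TCP target with data digest (--ddgst) enabled;
    # the resulting bdev shows up as nvme0n1 in the trace.
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0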
00:33:39.960 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:39.960 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:39.960 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:39.960 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:39.960 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:39.960 16:32:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:40.218 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:40.218 Zero copy mechanism will not be used.
00:33:40.218 Running I/O for 2 seconds...
00:33:40.218 [2024-07-15 16:32:23.018060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0)
00:33:40.218 [2024-07-15 16:32:23.018129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.218 [2024-07-15 16:32:23.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern -- a data digest error on tqpair 0x12035d0, the affected READ (len:32, the 131072-byte pass), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for the remainder of the 2-second run starting at 16:32:23.026, with only the timestamp, cid, lba, and sqhd changing ...]
DATA BLOCK TRANSPORT 0x0 00:33:40.478 [2024-07-15 16:32:23.417718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.478 [2024-07-15 16:32:23.424543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.478 [2024-07-15 16:32:23.424577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.478 [2024-07-15 16:32:23.424595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.478 [2024-07-15 16:32:23.431126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.478 [2024-07-15 16:32:23.431160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.478 [2024-07-15 16:32:23.431185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.478 [2024-07-15 16:32:23.437674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.478 [2024-07-15 16:32:23.437706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.478 [2024-07-15 16:32:23.437725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.478 [2024-07-15 16:32:23.444340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.478 [2024-07-15 16:32:23.444373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.478 [2024-07-15 16:32:23.444392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.478 [2024-07-15 16:32:23.450864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.478 [2024-07-15 16:32:23.450892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.478 [2024-07-15 16:32:23.450924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.737 [2024-07-15 16:32:23.457784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.737 [2024-07-15 16:32:23.457812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.737 [2024-07-15 16:32:23.457828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.737 [2024-07-15 16:32:23.465127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.737 [2024-07-15 16:32:23.465161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.737 [2024-07-15 16:32:23.465179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.737 [2024-07-15 16:32:23.471425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.737 [2024-07-15 16:32:23.471457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.737 [2024-07-15 16:32:23.471476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.478545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.478579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.478597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.485275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.485308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.485327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.491918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.491945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.491975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.498594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.498625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.498644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.505256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.505289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.505307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.511723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.511762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.511795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.518413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.518445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.518463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.524989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.525016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.525053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.531960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.531988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.532019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.539573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.539607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.539626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.546344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.546376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.546401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.553009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.553052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.553067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.560185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 
00:33:40.738 [2024-07-15 16:32:23.560219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.560237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.566787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.566814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.566844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.572599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.572631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.572650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.576725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.576764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.576798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.583165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.583198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.583217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.590343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.590376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.590395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.596859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.596886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.596916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.603713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.603782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.603799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.610665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.610698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.610717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.617980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.618007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.618043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.624768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.624818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.624834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.631182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.631220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.631238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.637753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.637797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.637812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.644534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.644566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.644584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.651140] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.651172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.651194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.657704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.657736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.657763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.664179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.664211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.664229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.670983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.671012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.671042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.677766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.677807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.677822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.684407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.684442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.684461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.691133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.738 [2024-07-15 16:32:23.691166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.738 [2024-07-15 16:32:23.691184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:40.738 [2024-07-15 16:32:23.698675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.739 [2024-07-15 16:32:23.698718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.739 [2024-07-15 16:32:23.698744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.739 [2024-07-15 16:32:23.705584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.739 [2024-07-15 16:32:23.705620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.739 [2024-07-15 16:32:23.705639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.739 [2024-07-15 16:32:23.712157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.739 [2024-07-15 16:32:23.712198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.739 [2024-07-15 16:32:23.712217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.719873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.719903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.719925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.727987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.728015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.728050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.735830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.735866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.735896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.744706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.744746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.744767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.752517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.752551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.752570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.759648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.759682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.759701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.766270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.766312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.766331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.773279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.773312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.773330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:40.999 [2024-07-15 16:32:23.780136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:40.999 [2024-07-15 16:32:23.780169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.999 [2024-07-15 16:32:23.780187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.786759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.786811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.786828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.793948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.793975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.794006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.800872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.800900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.800930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.807476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.807509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.807528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.814026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.814070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.814088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.821108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.821141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.821160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.827894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.827922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.827952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.834843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.834870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.834901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.841359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.841392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 
[2024-07-15 16:32:23.841410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.847939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.847965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.847996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.855125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.855159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.855186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.858830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.858857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.858897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.867113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.867157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.867176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.875667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.875699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.875718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.884383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.884417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.884435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.893476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.893508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.893526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.902901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.902928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.902958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.913271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.913305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.913334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.924250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.924286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.924305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.935181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.935213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.935231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.946057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.946084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.946114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.957583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.957615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.957633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.000 [2024-07-15 16:32:23.969297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.000 [2024-07-15 16:32:23.969331] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.000 [2024-07-15 16:32:23.969350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:23.981133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:23.981168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:23.981187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:23.993067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:23.993100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:23.993119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.004456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.004489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.004507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.015569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.015603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.015622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.027544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.027579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.027597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.039485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.039519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.039538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.050909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 
16:32:24.050937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.050967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.062307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.062340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.062359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.073720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.073761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.073794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.085284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.085316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.085334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.096845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.096872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.096902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.108324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.108356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.108381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.119828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.119855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.119885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.131133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.131165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.131183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.142472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.142504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.142523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.154293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.154327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.154345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.166423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.166458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.166477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.178007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.178035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.178066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.189526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.189559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.189577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.200304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0) 00:33:41.260 [2024-07-15 16:32:24.200340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.260 [2024-07-15 16:32:24.200359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:41.260 [2024-07-15 16:32:24.207753] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0)
00:33:41.260 [2024-07-15 16:32:24.207792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:41.260 [2024-07-15 16:32:24.207825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern repeats from 16:32:24.214901 through 16:32:25.007278 -- a data digest error on tqpair=(0x12035d0), the failing READ command (qid:1, varying cid and lba, len:32), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:33:42.044 [2024-07-15 16:32:25.013419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12035d0)
00:33:42.044 [2024-07-15 16:32:25.013451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.044 [2024-07-15 16:32:25.013469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:42.044
00:33:42.044 Latency(us)
00:33:42.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.044 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:42.044 nvme0n1 : 2.00 3735.55 466.94 0.00 0.00 4276.89 855.61 12815.93
===================================================================================================================
Total : 3735.55 466.94 0.00 0.00 4276.89 855.61 12815.93
00:33:42.044 0
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:42.302 | .driver_specific
00:33:42.302 | .nvme_error
00:33:42.302 | .status_code
00:33:42.302 | .command_transient_transport_error'
16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 241 > 0 ))
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 471922
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 471922 ']'
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 471922
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:42.302 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 471922
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 471922'
killing process with pid 471922
16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 471922
Received shutdown signal, test time was about 2.000000 seconds
00:33:42.559
00:33:42.559 Latency(us)
00:33:42.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
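The 241 checked by the (( 241 > 0 )) test above is the per-bdev count of transient transport errors pulled out of bdevperf's iostat JSON. A minimal sketch of that extraction, assuming the same rpc.py path, /var/tmp/bperf.sock socket, and bdev name this job uses:

# Count NVMe completions with TRANSIENT TRANSPORT ERROR status for one bdev.
# The counters only exist because bdev_nvme_set_options was called earlier
# with --nvme-error-stat; without it, .nvme_error is not populated.
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The digest test passes only if at least one such error was counted:
(( $(get_transient_errcount nvme0n1) > 0 ))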
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 471922
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=472449
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 472449 /var/tmp/bperf.sock
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 472449 ']'
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:42.559 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:42.816 [2024-07-15 16:32:25.551331] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:33:42.816 [2024-07-15 16:32:25.551409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472449 ]
00:33:42.816 EAL: No free 2048 kB hugepages reported on node 1
00:33:42.816 [2024-07-15 16:32:25.610004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:42.816 [2024-07-15 16:32:25.696209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:42.816 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:42.816 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:42.816 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:42.816 16:32:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:43.076 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:43.076 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:43.076 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.076 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:43.076 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:43.076 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:43.642 nvme0n1
16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.643 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:43.643 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:43.643 16:32:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:43.935 Running I/O for 2 seconds...
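Condensed from the xtrace above, the setup for this randwrite error run boils down to five steps. The sketch below replays them as a plain script; paths, the core mask, the address, and the nqn are this job's values, and which socket rpc_cmd targets is not shown in the trace (it is not bperf.sock), so the bare rpc.py call in step 4 is an assumption:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Launch bdevperf on core 1 (-m 2) in wait mode (-z): it sits idle until
#    perform_tests arrives over its RPC socket.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# 2. Enable per-controller NVMe error counters and retry failed I/O forever,
#    so digest errors are counted rather than failing the run outright.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the target with data digest enabled; --ddgst makes every TCP data
#    PDU carry a CRC32C that is verified on receive.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Inject crc32c corruption in the accel framework (rpc_cmd in the trace),
#    so digest verification fails and produces the error lines that follow.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256

# 5. Start the timed workload.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$BPERF_SOCK" perform_tests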
00:33:43.935 [2024-07-15 16:32:26.698601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e3060
[2024-07-15 16:32:26.699461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-15 16:32:26.699506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
[... the same three-record pattern repeats from 16:32:26.711964 through 16:32:27.110280 -- a Data digest error on tqpair=(0x1a694a0) with a varying pdu value, the failing WRITE command (qid:1, varying cid and lba, len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:33:44.196 [2024-07-15 16:32:27.120277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e3498
[2024-07-15 16:32:27.121370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-15 16:32:27.121394] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.196 [2024-07-15 16:32:27.131368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f6020 00:33:44.196 [2024-07-15 16:32:27.132472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.197 [2024-07-15 16:32:27.132496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.197 [2024-07-15 16:32:27.142572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fc998 00:33:44.197 [2024-07-15 16:32:27.143650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.197 [2024-07-15 16:32:27.143675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.197 [2024-07-15 16:32:27.153678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fef90 00:33:44.197 [2024-07-15 16:32:27.154775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.197 [2024-07-15 16:32:27.154801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.197 [2024-07-15 16:32:27.164808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f7da8 00:33:44.197 [2024-07-15 16:32:27.165879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.197 [2024-07-15 16:32:27.165904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.454 [2024-07-15 16:32:27.176428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f3a28 00:33:44.454 [2024-07-15 16:32:27.177519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.454 [2024-07-15 16:32:27.177543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.454 [2024-07-15 16:32:27.187786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f6890 00:33:44.454 [2024-07-15 16:32:27.188855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.454 [2024-07-15 16:32:27.188879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.454 [2024-07-15 16:32:27.198890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e12d8 00:33:44.454 [2024-07-15 16:32:27.199973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.454 [2024-07-15 
16:32:27.199998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.454 [2024-07-15 16:32:27.210053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ea680 00:33:44.454 [2024-07-15 16:32:27.211111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.454 [2024-07-15 16:32:27.211137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.221471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ee5c8 00:33:44.455 [2024-07-15 16:32:27.222559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.222583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.232573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190de038 00:33:44.455 [2024-07-15 16:32:27.233674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.233698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.243688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e23b8 00:33:44.455 [2024-07-15 16:32:27.244777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.244802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.254794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190eea00 00:33:44.455 [2024-07-15 16:32:27.255866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.255891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.265916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e6b70 00:33:44.455 [2024-07-15 16:32:27.266999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.267039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.278678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e88f8 00:33:44.455 [2024-07-15 16:32:27.280393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18550 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:44.455 [2024-07-15 16:32:27.280418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.289327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f7100 00:33:44.455 [2024-07-15 16:32:27.290526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.290551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.300483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f20d8 00:33:44.455 [2024-07-15 16:32:27.301717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.301765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.311860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f9b30 00:33:44.455 [2024-07-15 16:32:27.313090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.313115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.323020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f4b08 00:33:44.455 [2024-07-15 16:32:27.324268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.324292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.334220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fc128 00:33:44.455 [2024-07-15 16:32:27.335439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.335464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.345302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e38d0 00:33:44.455 [2024-07-15 16:32:27.346539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.346563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.356402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190eb760 00:33:44.455 [2024-07-15 16:32:27.357624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1938 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.357648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.367496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f92c0 00:33:44.455 [2024-07-15 16:32:27.368747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.368772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.378611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190efae0 00:33:44.455 [2024-07-15 16:32:27.379842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.379873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.389805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e6738 00:33:44.455 [2024-07-15 16:32:27.391067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.391091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.403115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ecc78 00:33:44.455 [2024-07-15 16:32:27.405073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.405118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.416266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ee5c8 00:33:44.455 [2024-07-15 16:32:27.418390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.418421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.455 [2024-07-15 16:32:27.425142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ea248 00:33:44.455 [2024-07-15 16:32:27.426085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.455 [2024-07-15 16:32:27.426115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.438448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190de038 00:33:44.713 [2024-07-15 16:32:27.439582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.439613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.451111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e4140 00:33:44.713 [2024-07-15 16:32:27.452232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.452262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.463978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f3e60 00:33:44.713 [2024-07-15 16:32:27.465302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.465333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.478325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f57b0 00:33:44.713 [2024-07-15 16:32:27.480253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.480284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.491392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e4de8 00:33:44.713 [2024-07-15 16:32:27.493528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.493558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.500267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e2c28 00:33:44.713 [2024-07-15 16:32:27.501187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.501217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.512096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f20d8 00:33:44.713 [2024-07-15 16:32:27.513014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.513039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.525129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f2948 00:33:44.713 [2024-07-15 16:32:27.526219] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.526250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.538186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ee190 00:33:44.713 [2024-07-15 16:32:27.539445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.539476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.551279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e1f80 00:33:44.713 [2024-07-15 16:32:27.552720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.552758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.564358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e5658 00:33:44.713 [2024-07-15 16:32:27.565964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.565989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.577462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f2948 00:33:44.713 [2024-07-15 16:32:27.579238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.579269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.590543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f8618 00:33:44.713 [2024-07-15 16:32:27.592492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.592523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.603657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190edd58 00:33:44.713 [2024-07-15 16:32:27.605772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.605814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.612542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190eaef0 00:33:44.713 [2024-07-15 16:32:27.613505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.613535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.624420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e1710 00:33:44.713 [2024-07-15 16:32:27.625362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.625391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.637656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ddc00 00:33:44.713 [2024-07-15 16:32:27.638769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.638812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.650839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190df118 00:33:44.713 [2024-07-15 16:32:27.652103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.652134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.663978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f7970 00:33:44.713 [2024-07-15 16:32:27.665429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.665460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:44.713 [2024-07-15 16:32:27.677167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e6fa8 00:33:44.713 [2024-07-15 16:32:27.678666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.713 [2024-07-15 16:32:27.678693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:44.714 [2024-07-15 16:32:27.690262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ddc00 00:33:44.714 [2024-07-15 16:32:27.692097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.714 [2024-07-15 16:32:27.692128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.703909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f7538 00:33:44.971 [2024-07-15 
16:32:27.705934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.705968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.717549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fda78 00:33:44.971 [2024-07-15 16:32:27.719720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.719783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.726760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e2c28 00:33:44.971 [2024-07-15 16:32:27.727695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.727725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.740366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f5378 00:33:44.971 [2024-07-15 16:32:27.741511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.741541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.755260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ef6a8 00:33:44.971 [2024-07-15 16:32:27.757058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.757098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.768768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e0ea0 00:33:44.971 [2024-07-15 16:32:27.770804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.770832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.782413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ed920 00:33:44.971 [2024-07-15 16:32:27.784682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.784719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.791728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e12d8 
00:33:44.971 [2024-07-15 16:32:27.792672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.792707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.805301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190eb328 00:33:44.971 [2024-07-15 16:32:27.806445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.806479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.818298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e49b0 00:33:44.971 [2024-07-15 16:32:27.819351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.819382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.830996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e2c28 00:33:44.971 [2024-07-15 16:32:27.832068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.832120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.843914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ef270 00:33:44.971 [2024-07-15 16:32:27.844832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.844865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.859390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f1ca0 00:33:44.971 [2024-07-15 16:32:27.861582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.861622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.868619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fe2e8 00:33:44.971 [2024-07-15 16:32:27.869607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.869643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.883066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with 
pdu=0x2000190fc998 00:33:44.971 [2024-07-15 16:32:27.884695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.884726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:44.971 [2024-07-15 16:32:27.896349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190eb760 00:33:44.971 [2024-07-15 16:32:27.898008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.971 [2024-07-15 16:32:27.898051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:44.972 [2024-07-15 16:32:27.909442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f7538 00:33:44.972 [2024-07-15 16:32:27.911311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.972 [2024-07-15 16:32:27.911345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:44.972 [2024-07-15 16:32:27.922219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f96f8 00:33:44.972 [2024-07-15 16:32:27.924143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.972 [2024-07-15 16:32:27.924181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:44.972 [2024-07-15 16:32:27.932654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fcdd0 00:33:44.972 [2024-07-15 16:32:27.933614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.972 [2024-07-15 16:32:27.933648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:44.972 [2024-07-15 16:32:27.945815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190de470 00:33:44.972 [2024-07-15 16:32:27.946792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.972 [2024-07-15 16:32:27.946832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:27.960363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fe720 00:33:45.230 [2024-07-15 16:32:27.962480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:27.962513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:27.969275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a694a0) with pdu=0x2000190f9b30 00:33:45.230 [2024-07-15 16:32:27.970076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:27.970114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:27.981217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f8618 00:33:45.230 [2024-07-15 16:32:27.982024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:27.982077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:27.994339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f1ca0 00:33:45.230 [2024-07-15 16:32:27.995351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:27.995390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:28.008257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e0a68 00:33:45.230 [2024-07-15 16:32:28.009510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:28.009541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:28.021156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e7818 00:33:45.230 [2024-07-15 16:32:28.022523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:28.022553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:28.033010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fc560 00:33:45.230 [2024-07-15 16:32:28.034432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:28.034480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:28.046214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ed0b0 00:33:45.230 [2024-07-15 16:32:28.047803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.230 [2024-07-15 16:32:28.047844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:45.230 [2024-07-15 16:32:28.057928] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e88f8 00:33:45.230 [2024-07-15 16:32:28.058910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.058940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.070267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f1ca0 00:33:45.231 [2024-07-15 16:32:28.071319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.071356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.083110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e7818 00:33:45.231 [2024-07-15 16:32:28.084278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.084314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.094955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f2948 00:33:45.231 [2024-07-15 16:32:28.096101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.096137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.108322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f2510 00:33:45.231 [2024-07-15 16:32:28.109709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.109748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.119958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f7538 00:33:45.231 [2024-07-15 16:32:28.120818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.120844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.132639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190f8e88 00:33:45.231 [2024-07-15 16:32:28.133334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.133371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.145768] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fa7d8 00:33:45.231 [2024-07-15 16:32:28.146592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.146623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.158981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e5ec8 00:33:45.231 [2024-07-15 16:32:28.159975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.160006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.170475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190e88f8 00:33:45.231 [2024-07-15 16:32:28.171753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.171785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.184070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fcdd0 00:33:45.231 [2024-07-15 16:32:28.185144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.185186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.195933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190dfdc0 00:33:45.231 [2024-07-15 16:32:28.197670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.197700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:45.231 [2024-07-15 16:32:28.206767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fb480 00:33:45.231 [2024-07-15 16:32:28.207625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.231 [2024-07-15 16:32:28.207654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:45.491 [2024-07-15 16:32:28.220923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190ed4e8 00:33:45.491 [2024-07-15 16:32:28.221939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.491 [2024-07-15 16:32:28.221965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:45.491 
[2024-07-15 16:32:28.233550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a694a0) with pdu=0x2000190fe720
00:33:45.491 [2024-07-15 16:32:28.234622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:45.491 [2024-07-15 16:32:28.234653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:45.491 [... further data digest error records on tqpair=(0x1a694a0) elided; every single-block WRITE on qid:1 completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the entries differing only in timestamp, cid, lba, pdu, and sqhd ...]
00:33:45.751
00:33:45.751 Latency(us)
00:33:45.751 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:33:45.751 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:45.751 nvme0n1                     :       2.01   20651.04      80.67      0.00      0.00    6188.26    2512.21   15340.28
00:33:45.751 ===================================================================================================================
00:33:45.751 Total                       :              20651.04      80.67      0.00      0.00    6188.26    2512.21   15340.28
00:33:45.751 0
00:33:45.751 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:45.751 | .driver_specific
00:33:45.751 | .nvme_error
00:33:45.751 | .status_code
00:33:45.751 | .command_transient_transport_error'
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:46.009 16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
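For reference, the get_transient_errcount step traced above reduces to a single RPC call piped through jq. A minimal stand-alone sketch, using the same workspace path, socket, and bdev name as this run (nothing here beyond what the trace itself shows):

    # Fetch per-bdev I/O statistics from the bdevperf RPC socket and extract the
    # count of completions with TRANSIENT TRANSPORT ERROR (00/22). The nvme_error
    # counters are populated because the controller was configured with
    # bdev_nvme_set_options --nvme-error-stat.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Here the query returned 162 transient transport errors, so the (( 162 > 0 )) check above passed and the digest-error run for this workload is counted as successful.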
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 472449
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 472449 ']'
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 472449
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
16:32:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 472449
00:33:46.267 16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 472449'
killing process with pid 472449
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 472449
Received shutdown signal, test time was about 2.000000 seconds
00:33:46.267
00:33:46.267 Latency(us)
00:33:46.267 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:33:46.267 ===================================================================================================================
00:33:46.267 Total                       :       0.00       0.00       0.00      0.00      0.00       0.00       0.00
00:33:46.267 16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 472449
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=472853
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 472853 /var/tmp/bperf.sock
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 472853 ']'
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:46.527 [2024-07-15 16:32:29.280673] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-15 16:32:29.280770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472853 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
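The run_bperf_err trace above launches a fresh bdevperf in RPC-server mode and waits for its UNIX socket before configuring it. A rough stand-alone equivalent of what the harness does; the rpc_get_methods polling loop is a stand-in for the waitforlisten helper, not its literal implementation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z: start idle and wait for a perform_tests RPC instead of running at once.
    # -m 2: core mask (core 1); -w randwrite -o 131072 -q 16 -t 2 are the
    # workload parameters passed as run_bperf_err randwrite 131072 16 above.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the RPC socket answers; rpc_get_methods is a cheap no-op query.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done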
00:33:46.527 EAL: No free 2048 kB hugepages reported on node 1
00:33:46.527 [2024-07-15 16:32:29.345833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:46.527 [2024-07-15 16:32:29.437412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:46.786 16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:47.043 16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
16:32:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:47.612 nvme0n1
00:33:47.612 16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
16:32:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:47.612 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:47.612 Zero copy mechanism will not be used.
00:33:47.612 Running I/O for 2 seconds...
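Condensed, the setup just traced is: enable per-status-code NVMe error counters with unlimited retries, attach the controller over TCP with data digest (--ddgst) enabled, arm crc32c corruption in the target's accel error-injection module, then release the queued workload. A sketch of the same sequence under stated assumptions: the target-side RPC socket (default /var/tmp/spdk.sock) is an assumption, and -i 32 is copied verbatim from the trace (presumably the number of operations to corrupt):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # initiator-side (bdevperf) RPC
    # Count error completions per NVMe status code; retry forever so injected
    # errors show up as counters rather than failed I/O.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Data digest makes the target verify a CRC32C over every data PDU payload.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Target side (default RPC socket, an assumption): corrupt crc32c results so
    # each digest check fails and the WRITE completes with COMMAND TRANSIENT
    # TRANSPORT ERROR (00/22), as seen in the records below.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the bdevperf job that -z left waiting.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests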
00:33:47.612 [2024-07-15 16:32:30.459863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90
00:33:47.612 [2024-07-15 16:32:30.460228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:47.612 [2024-07-15 16:32:30.460275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:47.612 [... further data digest error records on tqpair=(0x1a69850) with pdu=0x2000190fef90 elided; every 32-block WRITE on qid:1 cid:15 again completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the entries differing only in timestamp, lba, and sqhd ...]
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.166388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.166687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.166712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.172809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.173161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.173188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.179288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.179613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.179644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.185839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.186163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.186194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.192221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.192554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.192585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.199783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.200112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.200150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.206640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.206963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.206991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.213119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.213454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.213486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.219457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.219825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.219852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.226709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.227038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.227081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.234113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.234450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.234481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.242114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.242466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.242498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.249809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.250125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.250156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.258859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.259213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 
[2024-07-15 16:32:31.259244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.267825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.268169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.268201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.277182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.277550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.277581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.285805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.286166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.286198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.292507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.292858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.292885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.300002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.300358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.300390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.308005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.308441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:48.394 [2024-07-15 16:32:31.308472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:48.394 [2024-07-15 16:32:31.315387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:48.394 [2024-07-15 16:32:31.315711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL 
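For context on what tcp.c:2058:data_crc32_calc_done is flagging: when digests are negotiated on an NVMe/TCP connection, each DATA PDU carries a data digest (DDGST), a CRC32C over the PDU payload; the receiver recomputes the checksum and treats a mismatch as a transport-level error instead of accepting possibly corrupt data. A minimal sketch of that check with a bitwise CRC32C follows; it is illustrative only (verify_data_digest is a hypothetical helper, not SPDK's code).

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli, polynomial 0x1EDC6F41, reflected 0x82F63B78).
 * NVMe/TCP data digests are a CRC32C over the PDU DATA field.
 * Self-check: crc32c("123456789", 9) == 0xE3069283. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Receive-side check: recompute the digest over the received payload and
 * compare it with the DDGST carried in the PDU. A mismatch is the condition
 * the log reports as "Data digest error". */
static int verify_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst;
}

int main(void)
{
    uint8_t payload[] = "example PDU payload";
    uint32_t ddgst = crc32c(payload, sizeof(payload) - 1);

    printf("intact:    %d\n", verify_data_digest(payload, sizeof(payload) - 1, ddgst));
    payload[0] ^= 0x01; /* flip one bit, as if corrupted on the wire */
    printf("corrupted: %d\n", verify_data_digest(payload, sizeof(payload) - 1, ddgst));
    return 0;
}
```

Given how uniformly the errors repeat below, this autotest appears to be exercising the digest-failure path deliberately and checking that every affected WRITE is completed with a retryable transport error.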
00:33:48.394 [2024-07-15 16:32:31.315387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90
00:33:48.394 [2024-07-15 16:32:31.315711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.394 [2024-07-15 16:32:31.315749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... pattern continues unchanged from 16:32:31.323751 through 16:32:31.685950 (log time 00:33:48.394-00:33:48.916): every digest-error WRITE on qid:1 completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
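The "(00/22)" and the trailing flags in each spdk_nvme_print_completion line decode from the completion's 16-bit status field. A small decoder (a sketch based on the NVMe base specification's status layout, not SPDK source) makes the fields explicit:

```c
#include <stdint.h>
#include <stdio.h>

/* NVMe completion status field layout (NVMe base spec):
 * bit 0      P   - phase tag
 * bits 8:1   SC  - Status Code
 * bits 11:9  SCT - Status Code Type
 * bit 14     M   - more status information available
 * bit 15     DNR - do not retry
 * "(00/22)" is (SCT/SC): SCT 0x0 = generic command status, SC 0x22 =
 * Transient Transport Error, so with dnr:0 the host is allowed to retry. */
static void print_status(uint16_t sf)
{
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
           (unsigned)((sf >> 9) & 0x7),   /* SCT */
           (unsigned)((sf >> 1) & 0xFF),  /* SC  */
           (unsigned)(sf & 0x1),          /* P   */
           (unsigned)((sf >> 14) & 0x1),  /* M   */
           (unsigned)((sf >> 15) & 0x1)); /* DNR */
}

int main(void)
{
    /* Rebuild the status value seen throughout this log. */
    uint16_t sf = (uint16_t)((0x0 << 9) | (0x22 << 1));
    print_status(sf); /* -> (00/22) p:0 m:0 dnr:0 */
    return 0;
}
```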
00:33:48.916 [2024-07-15 16:32:31.693735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90
00:33:48.916 [2024-07-15 16:32:31.694048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.916 [2024-07-15 16:32:31.694074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... pattern repeats unchanged from 16:32:31.702010 through 16:32:32.072510 (log time 00:33:48.916-00:33:49.177) ...]
00:33:49.177 [2024-07-15 16:32:32.078567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.177
[2024-07-15 16:32:32.078874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.177 [2024-07-15 16:32:32.078909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.177 [2024-07-15 16:32:32.085427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.085802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.085829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.094014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.094340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.094366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.102212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.102525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.102552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.109450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.109760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.109787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.117072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.117375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.117401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.124683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.125008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.125035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.131325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.131630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.131656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.138134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.138521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.138561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.145377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.145748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.145775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.178 [2024-07-15 16:32:32.152645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.178 [2024-07-15 16:32:32.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.178 [2024-07-15 16:32:32.153109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.160227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.160658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.160693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.167385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.167792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.167836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.174892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.175219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.175263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.182394] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.182693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.182721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.188660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.188985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.189013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.195155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.195460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.195487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.201218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.201513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.201541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.207538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.207865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.207893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.215079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.215394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.215429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.221597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.221941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.221970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:49.456 [2024-07-15 16:32:32.227538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.227844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.227872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.233656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.233987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.234016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.239802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.240142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.240169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.245847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.246186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.246213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.252110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.252402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.252428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.258494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.258812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.258848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.264507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.264848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.264877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.270316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.270623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.270650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.456 [2024-07-15 16:32:32.276210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.456 [2024-07-15 16:32:32.276505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.456 [2024-07-15 16:32:32.276532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.282187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.282529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.282555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.288147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.288472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.288499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.295357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.295656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.295683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.302338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.302632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.302659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.309813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.310132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.310164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.316521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.316856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.316884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.323457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.323783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.323811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.330970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.331264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.331291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.339079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.339405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.339433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.346035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.346356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.346383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.353210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.353557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.353583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.360858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.361203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.361231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.367888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.368203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.368229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.374290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.374582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.374609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.381097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.381405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.381432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.388153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.388461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.388488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.394524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.394854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.394882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.402218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.402511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.402537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.409014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.409312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 
[2024-07-15 16:32:32.409339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.415431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.457 [2024-07-15 16:32:32.415772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.457 [2024-07-15 16:32:32.415801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.457 [2024-07-15 16:32:32.421939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.723 [2024-07-15 16:32:32.422251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.723 [2024-07-15 16:32:32.422300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.723 [2024-07-15 16:32:32.429662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.723 [2024-07-15 16:32:32.429999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.723 [2024-07-15 16:32:32.430035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.723 [2024-07-15 16:32:32.436623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.723 [2024-07-15 16:32:32.436948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.723 [2024-07-15 16:32:32.436983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.723 [2024-07-15 16:32:32.443360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.723 [2024-07-15 16:32:32.443661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.723 [2024-07-15 16:32:32.443688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.723 [2024-07-15 16:32:32.449685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a69850) with pdu=0x2000190fef90 00:33:49.723 [2024-07-15 16:32:32.449828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.723 [2024-07-15 16:32:32.449855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.723 00:33:49.723 Latency(us) 00:33:49.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.723 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:49.723 nvme0n1 : 2.00 4054.60 506.83 0.00 0.00 3937.32 2767.08 13495.56 
00:33:49.724 0
00:33:49.724 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:49.724 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:49.724 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:49.724 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:49.724 | .driver_specific
00:33:49.724 | .nvme_error
00:33:49.724 | .status_code
00:33:49.724 | .command_transient_transport_error'
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 262 > 0 ))
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 472853
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 472853 ']'
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 472853
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 472853
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 472853'
killing process with pid 472853
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 472853
00:33:49.983 Received shutdown signal, test time was about 2.000000 seconds
00:33:49.983
00:33:49.983 Latency(us)
00:33:49.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:49.983 ===================================================================================================================
00:33:49.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 472853
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 471487
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 471487 ']'
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 471487
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:49.983 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 471487
00:33:50.242 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:33:50.242 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:50.242 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 471487' 00:33:50.242 killing process with pid 471487 00:33:50.242 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 471487 00:33:50.242 16:32:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 471487 00:33:50.500 00:33:50.500 real 0m15.203s 00:33:50.500 user 0m29.568s 00:33:50.500 sys 0m4.816s 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.500 ************************************ 00:33:50.500 END TEST nvmf_digest_error 00:33:50.500 ************************************ 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:50.500 rmmod nvme_tcp 00:33:50.500 rmmod nvme_fabrics 00:33:50.500 rmmod nvme_keyring 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 471487 ']' 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 471487 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 471487 ']' 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 471487 00:33:50.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (471487) - No such process 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 471487 is not found' 00:33:50.500 Process with pid 471487 is not found 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.500 16:32:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.402 16:32:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- 
# ip -4 addr flush cvl_0_1 00:33:52.402 00:33:52.402 real 0m34.670s 00:33:52.402 user 0m59.856s 00:33:52.402 sys 0m11.023s 00:33:52.402 16:32:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:52.402 16:32:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:52.402 ************************************ 00:33:52.402 END TEST nvmf_digest 00:33:52.402 ************************************ 00:33:52.402 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:52.402 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:52.402 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:52.402 16:32:35 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:52.402 16:32:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:52.402 16:32:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:52.402 16:32:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.659 ************************************ 00:33:52.659 START TEST nvmf_bdevperf 00:33:52.659 ************************************ 00:33:52.659 16:32:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:52.659 * Looking for test storage... 00:33:52.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:52.659 16:32:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.659 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:52.659 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.659 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.659 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
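With the digest suite finished, bdevperf.sh starts by sourcing nvmf/common.sh, which, alongside the port and address defaults traced above, generates the host identity that later nvme connect calls will present. Reconstructed as a sketch (the hostid-from-NQN extraction is this sketch's assumption, not a quote of common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: the hostid is the NQN's UUID suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'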
00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
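The trace next enters nvmftestinit, which scans the PCI bus for NVMe-oF-capable NICs, here the two Intel E810 ports (device ID 0x159b, driver ice) at 0000:84:00.0 and 0000:84:00.1, and collects the netdev names the kernel bound to them (cvl_0_0 and cvl_0_1). common.sh drives this from a pre-built pci_bus_cache map; a simplified, self-contained equivalent of the discovery step would be:

    #!/usr/bin/env bash
    # enumerate Intel E810 (8086:159b) ports and the netdevs behind them
    net_devs=()
    while read -r addr _; do
        for dev in "/sys/bus/pci/devices/0000:$addr/net/"*; do
            [[ -e $dev ]] && net_devs+=("${dev##*/}")   # e.g. cvl_0_0, cvl_0_1
        done
    done < <(lspci -d 8086:159b)
    printf 'Found net device: %s\n' "${net_devs[@]}"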
00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:52.660 16:32:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.564 
16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:54.564 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.564 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:54.565 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:54.565 Found net devices under 0000:84:00.0: cvl_0_0 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:54.565 Found net devices under 0000:84:00.1: cvl_0_1 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.565 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.823 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:54.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:33:54.824 00:33:54.824 --- 10.0.0.2 ping statistics --- 00:33:54.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.824 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:33:54.824 00:33:54.824 --- 10.0.0.1 ping statistics --- 00:33:54.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.824 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=475221 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 475221 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 475221 ']' 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
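The nvmf_tcp_init trace above reduces to a handful of iproute2/iptables steps: flush both ports, move the target port into a fresh network namespace, address both sides, open TCP port 4420, and ping in both directions. A minimal standalone sketch of the same setup, assuming two back-to-back ports named cvl_0_0 and cvl_0_1 as in this run (names will differ on other hardware):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns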
00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:54.824 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:54.824 [2024-07-15 16:32:37.706341] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:54.824 [2024-07-15 16:32:37.706421] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.824 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.824 [2024-07-15 16:32:37.775962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:55.083 [2024-07-15 16:32:37.869743] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.083 [2024-07-15 16:32:37.869808] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.083 [2024-07-15 16:32:37.869825] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.083 [2024-07-15 16:32:37.869838] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.083 [2024-07-15 16:32:37.869850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.083 [2024-07-15 16:32:37.870303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.083 [2024-07-15 16:32:37.870362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:55.083 [2024-07-15 16:32:37.870365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.083 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:55.083 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:55.083 16:32:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:55.083 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:55.083 16:32:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.083 [2024-07-15 16:32:38.014998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.083 Malloc0 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.083 16:32:38 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:55.083 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.342 [2024-07-15 16:32:38.074371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:55.342 { 00:33:55.342 "params": { 00:33:55.342 "name": "Nvme$subsystem", 00:33:55.342 "trtype": "$TEST_TRANSPORT", 00:33:55.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.342 "adrfam": "ipv4", 00:33:55.342 "trsvcid": "$NVMF_PORT", 00:33:55.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.342 "hdgst": ${hdgst:-false}, 00:33:55.342 "ddgst": ${ddgst:-false} 00:33:55.342 }, 00:33:55.342 "method": "bdev_nvme_attach_controller" 00:33:55.342 } 00:33:55.342 EOF 00:33:55.342 )") 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:55.342 16:32:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:55.342 "params": { 00:33:55.342 "name": "Nvme1", 00:33:55.342 "trtype": "tcp", 00:33:55.342 "traddr": "10.0.0.2", 00:33:55.342 "adrfam": "ipv4", 00:33:55.342 "trsvcid": "4420", 00:33:55.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.342 "hdgst": false, 00:33:55.342 "ddgst": false 00:33:55.342 }, 00:33:55.342 "method": "bdev_nvme_attach_controller" 00:33:55.342 }' 00:33:55.342 [2024-07-15 16:32:38.121151] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
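Stripped of the xtrace prefixes, the target-side configuration above is five RPCs issued to the nvmf_tgt just started inside the namespace: create the TCP transport, back it with a RAM disk, and publish that disk as a namespace of a subsystem listening on the namespaced IP. A sketch using scripts/rpc.py directly, assuming the default /var/tmp/spdk.sock RPC socket:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, with the harness's options
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420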
00:33:55.342 [2024-07-15 16:32:38.121222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475362 ] 00:33:55.342 EAL: No free 2048 kB hugepages reported on node 1 00:33:55.342 [2024-07-15 16:32:38.182788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.342 [2024-07-15 16:32:38.273357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.599 Running I/O for 1 seconds... 00:33:56.531 00:33:56.531 Latency(us) 00:33:56.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.531 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:56.531 Verification LBA range: start 0x0 length 0x4000 00:33:56.531 Nvme1n1 : 1.01 8931.37 34.89 0.00 0.00 14276.04 2767.08 16019.91 00:33:56.531 =================================================================================================================== 00:33:56.531 Total : 8931.37 34.89 0.00 0.00 14276.04 2767.08 16019.91 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=475506 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:56.790 { 00:33:56.790 "params": { 00:33:56.790 "name": "Nvme$subsystem", 00:33:56.790 "trtype": "$TEST_TRANSPORT", 00:33:56.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.790 "adrfam": "ipv4", 00:33:56.790 "trsvcid": "$NVMF_PORT", 00:33:56.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.790 "hdgst": ${hdgst:-false}, 00:33:56.790 "ddgst": ${ddgst:-false} 00:33:56.790 }, 00:33:56.790 "method": "bdev_nvme_attach_controller" 00:33:56.790 } 00:33:56.790 EOF 00:33:56.790 )") 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:56.790 16:32:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:56.790 "params": { 00:33:56.790 "name": "Nvme1", 00:33:56.790 "trtype": "tcp", 00:33:56.790 "traddr": "10.0.0.2", 00:33:56.790 "adrfam": "ipv4", 00:33:56.790 "trsvcid": "4420", 00:33:56.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.790 "hdgst": false, 00:33:56.790 "ddgst": false 00:33:56.790 }, 00:33:56.790 "method": "bdev_nvme_attach_controller" 00:33:56.790 }' 00:33:56.790 [2024-07-15 16:32:39.726564] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
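The heredoc that gen_nvmf_target_json expands above is handed to bdevperf on an anonymous fd (--json /dev/fd/62). Written to a regular file instead — a sketch that assumes the standard SPDK JSON-config wrapper around the bdev_nvme_attach_controller fragment printed above — the one-second verify run is equivalent to:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write-then-read-back; -t 1: run for 1 s
build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1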
00:33:56.790 [2024-07-15 16:32:39.726638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475506 ] 00:33:56.790 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.049 [2024-07-15 16:32:39.787373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.049 [2024-07-15 16:32:39.875797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.307 Running I/O for 15 seconds... 00:33:59.839 16:32:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 475221 00:33:59.839 16:32:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:59.839 [2024-07-15 16:32:42.695840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.695896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.839 [2024-07-15 16:32:42.695927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.695951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.839 [2024-07-15 16:32:42.695970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.695984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.839 [2024-07-15 16:32:42.696000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.696014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.839 [2024-07-15 16:32:42.696044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.696058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.839 [2024-07-15 16:32:42.696073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.696101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.839 [2024-07-15 16:32:42.696120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.696136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.839 [2024-07-15 16:32:42.696153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.839 [2024-07-15 16:32:42.696169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:59.839 [... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs condensed: every remaining queued I/O on qid:1 — READs lba:59288 through lba:59536 and WRITEs lba:59552 through lba:60240 — is printed and completed with ABORTED - SQ DELETION (00/08) as the deleted submission queue drains ...] 00:33:59.842 [2024-07-15 16:32:42.700140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144e670 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.700158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:59.842 [2024-07-15 16:32:42.700175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:59.842 [2024-07-15 16:32:42.700188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59544 len:8 PRP1 0x0 PRP2 0x0 00:33:59.842 [2024-07-15 16:32:42.700203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.842 [2024-07-15 16:32:42.700269] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x144e670 was disconnected
and freed. reset controller. 00:33:59.842 [2024-07-15 16:32:42.700344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.842 [2024-07-15 16:32:42.700367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.842 [2024-07-15 16:32:42.700385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.842 [2024-07-15 16:32:42.700399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.842 [2024-07-15 16:32:42.700414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.842 [2024-07-15 16:32:42.700429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.842 [2024-07-15 16:32:42.700444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.842 [2024-07-15 16:32:42.700458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.842 [2024-07-15 16:32:42.700472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.704257] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.842 [2024-07-15 16:32:42.704303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.842 [2024-07-15 16:32:42.705038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.842 [2024-07-15 16:32:42.705071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.842 [2024-07-15 16:32:42.705090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.705331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.842 [2024-07-15 16:32:42.705575] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.842 [2024-07-15 16:32:42.705599] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.842 [2024-07-15 16:32:42.705630] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.842 [2024-07-15 16:32:42.709235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
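From here on the trace is one reconnect cycle repeated with only the timestamps changing: the controller disconnects, the TCP reconnect to 10.0.0.2:4420 fails with errno = 111, and the reset attempt is abandoned. On Linux, errno 111 is ECONNREFUSED, meaning nothing is accepting connections at that address/port anymore. The standalone sketch below reproduces exactly that failure shape with plain POSIX sockets; it is not SPDK's reconnect path, and the retry count and backoff are invented for illustration.

/* Minimal sketch (assumption: plain POSIX sockets, not SPDK code).
 * With no listener on 10.0.0.2:4420, connect() fails immediately with
 * errno 111 (ECONNREFUSED), matching the log lines above and below. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {   /* arbitrary retry cap */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        /* With the target down this prints: connect() failed, errno = 111 */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        usleep(100 * 1000);                        /* brief backoff, then retry */
    }
    return 1;
}

Compiled and run against a host where nothing listens on that port, it prints the same "connect() failed, errno = 111" line on every attempt and then gives up, which is the shape of every cycle in this trace.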
00:33:59.842 [2024-07-15 16:32:42.718560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.842 [2024-07-15 16:32:42.719034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.842 [2024-07-15 16:32:42.719072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.842 [2024-07-15 16:32:42.719091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.719330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.842 [2024-07-15 16:32:42.719580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.842 [2024-07-15 16:32:42.719604] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.842 [2024-07-15 16:32:42.719620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.842 [2024-07-15 16:32:42.723220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.842 [2024-07-15 16:32:42.732561] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.842 [2024-07-15 16:32:42.733064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.842 [2024-07-15 16:32:42.733096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.842 [2024-07-15 16:32:42.733117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.733357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.842 [2024-07-15 16:32:42.733600] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.842 [2024-07-15 16:32:42.733623] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.842 [2024-07-15 16:32:42.733638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.842 [2024-07-15 16:32:42.737241] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.842 [2024-07-15 16:32:42.746554] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.842 [2024-07-15 16:32:42.747081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.842 [2024-07-15 16:32:42.747113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.842 [2024-07-15 16:32:42.747131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.747370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.842 [2024-07-15 16:32:42.747613] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.842 [2024-07-15 16:32:42.747636] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.842 [2024-07-15 16:32:42.747652] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.842 [2024-07-15 16:32:42.751245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.842 [2024-07-15 16:32:42.760557] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.842 [2024-07-15 16:32:42.760935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.842 [2024-07-15 16:32:42.760967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.842 [2024-07-15 16:32:42.760986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.761224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.842 [2024-07-15 16:32:42.761467] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.842 [2024-07-15 16:32:42.761490] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.842 [2024-07-15 16:32:42.761505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.842 [2024-07-15 16:32:42.765095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.842 [2024-07-15 16:32:42.774334] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.842 [2024-07-15 16:32:42.774840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.842 [2024-07-15 16:32:42.774872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.842 [2024-07-15 16:32:42.774890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.842 [2024-07-15 16:32:42.775128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.842 [2024-07-15 16:32:42.775372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.842 [2024-07-15 16:32:42.775395] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.842 [2024-07-15 16:32:42.775410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.842 [2024-07-15 16:32:42.778958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:59.843 [2024-07-15 16:32:42.788272] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.843 [2024-07-15 16:32:42.788742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.843 [2024-07-15 16:32:42.788783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.843 [2024-07-15 16:32:42.788815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.843 [2024-07-15 16:32:42.789038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.843 [2024-07-15 16:32:42.789289] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.843 [2024-07-15 16:32:42.789312] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.843 [2024-07-15 16:32:42.789327] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.843 [2024-07-15 16:32:42.792884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.843 [2024-07-15 16:32:42.802173] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.843 [2024-07-15 16:32:42.802642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.843 [2024-07-15 16:32:42.802673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:33:59.843 [2024-07-15 16:32:42.802690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:33:59.843 [2024-07-15 16:32:42.802939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:33:59.843 [2024-07-15 16:32:42.803183] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.843 [2024-07-15 16:32:42.803207] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.843 [2024-07-15 16:32:42.803222] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.843 [2024-07-15 16:32:42.806810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.105 [2024-07-15 16:32:42.816123] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.105 [2024-07-15 16:32:42.816557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.105 [2024-07-15 16:32:42.816588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.105 [2024-07-15 16:32:42.816611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.105 [2024-07-15 16:32:42.816863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.105 [2024-07-15 16:32:42.817107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.105 [2024-07-15 16:32:42.817130] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.105 [2024-07-15 16:32:42.817146] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.105 [2024-07-15 16:32:42.820723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.105 [2024-07-15 16:32:42.830043] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.105 [2024-07-15 16:32:42.830523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.105 [2024-07-15 16:32:42.830554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.105 [2024-07-15 16:32:42.830573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.105 [2024-07-15 16:32:42.830823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.105 [2024-07-15 16:32:42.831067] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.105 [2024-07-15 16:32:42.831090] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.105 [2024-07-15 16:32:42.831105] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.105 [2024-07-15 16:32:42.834683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.105 [2024-07-15 16:32:42.844006] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.105 [2024-07-15 16:32:42.844480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.105 [2024-07-15 16:32:42.844511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.105 [2024-07-15 16:32:42.844528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.105 [2024-07-15 16:32:42.844779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.105 [2024-07-15 16:32:42.845023] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.105 [2024-07-15 16:32:42.845046] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.105 [2024-07-15 16:32:42.845061] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.105 [2024-07-15 16:32:42.848640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.105 [2024-07-15 16:32:42.857972] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.105 [2024-07-15 16:32:42.858466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.105 [2024-07-15 16:32:42.858502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.105 [2024-07-15 16:32:42.858520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.105 [2024-07-15 16:32:42.858770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.105 [2024-07-15 16:32:42.859014] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.105 [2024-07-15 16:32:42.859043] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.105 [2024-07-15 16:32:42.859059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.105 [2024-07-15 16:32:42.862640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.105 [2024-07-15 16:32:42.871958] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.105 [2024-07-15 16:32:42.872376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.105 [2024-07-15 16:32:42.872407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.105 [2024-07-15 16:32:42.872425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.105 [2024-07-15 16:32:42.872664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.105 [2024-07-15 16:32:42.872916] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.105 [2024-07-15 16:32:42.872940] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.105 [2024-07-15 16:32:42.872956] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.105 [2024-07-15 16:32:42.876539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.105 [2024-07-15 16:32:42.885865] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.105 [2024-07-15 16:32:42.886357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.105 [2024-07-15 16:32:42.886388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.105 [2024-07-15 16:32:42.886406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.105 [2024-07-15 16:32:42.886645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.105 [2024-07-15 16:32:42.886897] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.105 [2024-07-15 16:32:42.886921] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.105 [2024-07-15 16:32:42.886937] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.105 [2024-07-15 16:32:42.890520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.105 [2024-07-15 16:32:42.899826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.105 [2024-07-15 16:32:42.900260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.105 [2024-07-15 16:32:42.900290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.105 [2024-07-15 16:32:42.900308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.105 [2024-07-15 16:32:42.900546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.105 [2024-07-15 16:32:42.900801] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.900825] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.900840] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.904419] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
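Each failed cycle also logs "Failed to flush tqpair=... (9): Bad file descriptor". Errno 9 is EBADF: the qpair's socket has already been torn down by the failed connect, so any further I/O on that descriptor is rejected by the kernel. A minimal illustration of that errno, again using plain POSIX calls rather than SPDK's flush path:

/* Minimal sketch (assumption: plain POSIX calls, not SPDK's flush path).
 * Once a socket fd has been closed, further I/O on it fails with
 * errno 9 (EBADF) -- the "(9): Bad file descriptor" seen in the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    close(fd);                                     /* fd is now invalid */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0)
        /* Prints: send() failed, errno = 9 (Bad file descriptor) */
        printf("send() failed, errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}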
00:34:00.106 [2024-07-15 16:32:42.913722] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:42.914219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:42.914250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:42.914267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:42.914506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:42.914759] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.914793] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.914807] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.918385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.106 [2024-07-15 16:32:42.927680] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:42.928171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:42.928201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:42.928219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:42.928457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:42.928700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.928724] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.928749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.932337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.106 [2024-07-15 16:32:42.941639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:42.942147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:42.942177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:42.942194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:42.942433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:42.942675] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.942698] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.942713] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.946300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.106 [2024-07-15 16:32:42.955606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:42.956039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:42.956070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:42.956088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:42.956333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:42.956576] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.956599] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.956614] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.959927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.106 [2024-07-15 16:32:42.969215] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:42.969673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:42.969698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:42.969728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:42.969976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:42.970208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.970228] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.970242] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.973424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.106 [2024-07-15 16:32:42.982719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:42.983205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:42.983233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:42.983262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:42.983458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:42.983656] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.983675] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.983688] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.986676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.106 [2024-07-15 16:32:42.995985] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:42.996425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:42.996464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:42.996479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:42.996674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:42.996906] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:42.996927] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:42.996945] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:42.999933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.106 [2024-07-15 16:32:43.009236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:43.009647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:43.009671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:43.009685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:43.009925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:43.010145] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:43.010164] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:43.010177] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:43.013205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.106 [2024-07-15 16:32:43.022490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:43.022959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:43.022985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:43.023014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:43.023226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:43.023425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:43.023444] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:43.023457] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:43.026447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.106 [2024-07-15 16:32:43.035753] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:43.036225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:43.036273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:43.036288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:43.036483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:43.036682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.106 [2024-07-15 16:32:43.036702] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.106 [2024-07-15 16:32:43.036714] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.106 [2024-07-15 16:32:43.039704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.106 [2024-07-15 16:32:43.049072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.106 [2024-07-15 16:32:43.049490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.106 [2024-07-15 16:32:43.049538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.106 [2024-07-15 16:32:43.049554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.106 [2024-07-15 16:32:43.049776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.106 [2024-07-15 16:32:43.049982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.107 [2024-07-15 16:32:43.050002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.107 [2024-07-15 16:32:43.050030] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.107 [2024-07-15 16:32:43.053006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.107 [2024-07-15 16:32:43.062349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.107 [2024-07-15 16:32:43.062776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.107 [2024-07-15 16:32:43.062817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.107 [2024-07-15 16:32:43.062832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.107 [2024-07-15 16:32:43.063061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.107 [2024-07-15 16:32:43.063260] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.107 [2024-07-15 16:32:43.063279] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.107 [2024-07-15 16:32:43.063292] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.107 [2024-07-15 16:32:43.066281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.107 [2024-07-15 16:32:43.075567] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.107 [2024-07-15 16:32:43.076044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.107 [2024-07-15 16:32:43.076068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.107 [2024-07-15 16:32:43.076082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.107 [2024-07-15 16:32:43.076293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.107 [2024-07-15 16:32:43.076492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.107 [2024-07-15 16:32:43.076511] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.107 [2024-07-15 16:32:43.076523] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.107 [2024-07-15 16:32:43.079679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.368 [2024-07-15 16:32:43.088980] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.368 [2024-07-15 16:32:43.089427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.368 [2024-07-15 16:32:43.089451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.368 [2024-07-15 16:32:43.089479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.368 [2024-07-15 16:32:43.089675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.368 [2024-07-15 16:32:43.089909] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.368 [2024-07-15 16:32:43.089945] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.368 [2024-07-15 16:32:43.089958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.368 [2024-07-15 16:32:43.093007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.368 [2024-07-15 16:32:43.102299] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.368 [2024-07-15 16:32:43.102716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.368 [2024-07-15 16:32:43.102745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.368 [2024-07-15 16:32:43.102776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.368 [2024-07-15 16:32:43.102978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.368 [2024-07-15 16:32:43.103194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.368 [2024-07-15 16:32:43.103214] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.368 [2024-07-15 16:32:43.103227] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.368 [2024-07-15 16:32:43.106215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.368 [2024-07-15 16:32:43.115529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.368 [2024-07-15 16:32:43.116016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.368 [2024-07-15 16:32:43.116060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.368 [2024-07-15 16:32:43.116074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.368 [2024-07-15 16:32:43.116283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.368 [2024-07-15 16:32:43.116482] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.368 [2024-07-15 16:32:43.116500] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.368 [2024-07-15 16:32:43.116513] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.368 [2024-07-15 16:32:43.119498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.368 [2024-07-15 16:32:43.128845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.368 [2024-07-15 16:32:43.129315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.368 [2024-07-15 16:32:43.129339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.368 [2024-07-15 16:32:43.129369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.368 [2024-07-15 16:32:43.129564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.368 [2024-07-15 16:32:43.129791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.368 [2024-07-15 16:32:43.129811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.368 [2024-07-15 16:32:43.129824] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.368 [2024-07-15 16:32:43.132813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.368 [2024-07-15 16:32:43.142117] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.368 [2024-07-15 16:32:43.142505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.368 [2024-07-15 16:32:43.142544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.368 [2024-07-15 16:32:43.142558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.368 [2024-07-15 16:32:43.142794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.368 [2024-07-15 16:32:43.143000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.368 [2024-07-15 16:32:43.143034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.368 [2024-07-15 16:32:43.143047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.368 [2024-07-15 16:32:43.146020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.368 [2024-07-15 16:32:43.155313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.368 [2024-07-15 16:32:43.155744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.368 [2024-07-15 16:32:43.155784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.368 [2024-07-15 16:32:43.155799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.368 [2024-07-15 16:32:43.156015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.368 [2024-07-15 16:32:43.156232] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.156252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.156264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.159253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.369 [2024-07-15 16:32:43.168616] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.169116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.169156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.169171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.169366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.169565] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.169584] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.169597] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.172581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.369 [2024-07-15 16:32:43.181880] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.182362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.182400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.182419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.182616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.182843] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.182863] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.182876] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.185862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.369 [2024-07-15 16:32:43.195154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.195582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.195606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.195634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.195858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.196078] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.196097] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.196110] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.199097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.369 [2024-07-15 16:32:43.208624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.209124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.209149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.209164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.209392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.209620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.209640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.209653] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.212947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.369 [2024-07-15 16:32:43.221920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.222445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.222469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.222499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.222694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.222922] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.222951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.222965] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.225951] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.369 [2024-07-15 16:32:43.235242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.235732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.235763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.235778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.235995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.236210] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.236229] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.236242] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.239237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.369 [2024-07-15 16:32:43.248523] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.248969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.249009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.249025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.249236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.249435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.249454] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.249466] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.252452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.369 [2024-07-15 16:32:43.261791] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.262265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.262304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.262319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.262514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.262713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.262755] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.262769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.265756] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.369 [2024-07-15 16:32:43.275105] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.275527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.275552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.275581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.275806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.276012] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.276047] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.276061] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.279046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.369 [2024-07-15 16:32:43.288325] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.288796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.288835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.288850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.289046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.289245] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.369 [2024-07-15 16:32:43.289264] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.369 [2024-07-15 16:32:43.289277] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.369 [2024-07-15 16:32:43.292264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.369 [2024-07-15 16:32:43.301544] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.369 [2024-07-15 16:32:43.302046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.369 [2024-07-15 16:32:43.302071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.369 [2024-07-15 16:32:43.302100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.369 [2024-07-15 16:32:43.302296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.369 [2024-07-15 16:32:43.302495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.370 [2024-07-15 16:32:43.302514] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.370 [2024-07-15 16:32:43.302526] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.370 [2024-07-15 16:32:43.305514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.370 [2024-07-15 16:32:43.314857] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.370 [2024-07-15 16:32:43.315326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.370 [2024-07-15 16:32:43.315365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.370 [2024-07-15 16:32:43.315384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.370 [2024-07-15 16:32:43.315581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.370 [2024-07-15 16:32:43.315808] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.370 [2024-07-15 16:32:43.315829] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.370 [2024-07-15 16:32:43.315842] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.370 [2024-07-15 16:32:43.318830] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.370 [2024-07-15 16:32:43.328130] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.370 [2024-07-15 16:32:43.328585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.370 [2024-07-15 16:32:43.328624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.370 [2024-07-15 16:32:43.328639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.370 [2024-07-15 16:32:43.328863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.370 [2024-07-15 16:32:43.329084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.370 [2024-07-15 16:32:43.329103] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.370 [2024-07-15 16:32:43.329115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.370 [2024-07-15 16:32:43.332101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.370 [2024-07-15 16:32:43.341423] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.370 [2024-07-15 16:32:43.341872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.370 [2024-07-15 16:32:43.341897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.370 [2024-07-15 16:32:43.341925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.370 [2024-07-15 16:32:43.342139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.370 [2024-07-15 16:32:43.342338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.370 [2024-07-15 16:32:43.342357] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.370 [2024-07-15 16:32:43.342370] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.370 [2024-07-15 16:32:43.345607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.629 [2024-07-15 16:32:43.354765] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.629 [2024-07-15 16:32:43.355239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.629 [2024-07-15 16:32:43.355278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.629 [2024-07-15 16:32:43.355293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.629 [2024-07-15 16:32:43.355488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.629 [2024-07-15 16:32:43.355687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.629 [2024-07-15 16:32:43.355710] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.629 [2024-07-15 16:32:43.355747] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.629 [2024-07-15 16:32:43.358717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.629 [2024-07-15 16:32:43.368076] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.629 [2024-07-15 16:32:43.368543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.629 [2024-07-15 16:32:43.368582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.629 [2024-07-15 16:32:43.368597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.629 [2024-07-15 16:32:43.368821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.629 [2024-07-15 16:32:43.369027] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.629 [2024-07-15 16:32:43.369060] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.629 [2024-07-15 16:32:43.369073] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.629 [2024-07-15 16:32:43.372060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.630 [2024-07-15 16:32:43.381345] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.381812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.381838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.381878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.382095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.382295] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.382314] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.382326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.385312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.630 [2024-07-15 16:32:43.394599] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.395046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.395085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.395100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.395309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.395509] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.395528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.395540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.398643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.630 [2024-07-15 16:32:43.407814] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.408267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.408306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.408321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.408516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.408715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.408757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.408772] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.411797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.630 [2024-07-15 16:32:43.421092] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.421550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.421574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.421603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.421827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.422032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.422066] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.422078] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.425064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.630 [2024-07-15 16:32:43.434342] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.434768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.434808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.434823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.435037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.435254] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.435273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.435285] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.438273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.630 [2024-07-15 16:32:43.447550] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.448002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.448027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.448042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.448241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.448440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.448459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.448472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.451457] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.630 [2024-07-15 16:32:43.461130] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.461567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.461592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.461607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.461852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.462078] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.462098] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.462111] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.465236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.630 [2024-07-15 16:32:43.474398] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.474852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.474892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.474908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.475121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.475320] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.475339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.475352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.478339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.630 [2024-07-15 16:32:43.487629] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.488048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.488074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.488088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.488284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.488483] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.488502] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.488519] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.491510] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.630 [2024-07-15 16:32:43.500998] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.501444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.501484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.501500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.501694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.501923] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.501943] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.501956] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.504943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.630 [2024-07-15 16:32:43.514275] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.630 [2024-07-15 16:32:43.514693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.630 [2024-07-15 16:32:43.514717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.630 [2024-07-15 16:32:43.514752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.630 [2024-07-15 16:32:43.514954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.630 [2024-07-15 16:32:43.515169] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.630 [2024-07-15 16:32:43.515189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.630 [2024-07-15 16:32:43.515201] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.630 [2024-07-15 16:32:43.518184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.631 [2024-07-15 16:32:43.527476] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.631 [2024-07-15 16:32:43.527927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.631 [2024-07-15 16:32:43.527953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.631 [2024-07-15 16:32:43.527984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.631 [2024-07-15 16:32:43.528196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.631 [2024-07-15 16:32:43.528395] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.631 [2024-07-15 16:32:43.528414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.631 [2024-07-15 16:32:43.528427] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.631 [2024-07-15 16:32:43.531415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.631 [2024-07-15 16:32:43.540698] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.631 [2024-07-15 16:32:43.541202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.631 [2024-07-15 16:32:43.541246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.631 [2024-07-15 16:32:43.541262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.631 [2024-07-15 16:32:43.541458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.631 [2024-07-15 16:32:43.541657] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.631 [2024-07-15 16:32:43.541676] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.631 [2024-07-15 16:32:43.541688] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.631 [2024-07-15 16:32:43.544678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.631 [2024-07-15 16:32:43.553989] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.631 [2024-07-15 16:32:43.554470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.631 [2024-07-15 16:32:43.554494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.631 [2024-07-15 16:32:43.554523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.631 [2024-07-15 16:32:43.554733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.631 [2024-07-15 16:32:43.554949] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.631 [2024-07-15 16:32:43.554969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.631 [2024-07-15 16:32:43.554982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.631 [2024-07-15 16:32:43.557964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.631 [2024-07-15 16:32:43.567301] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.631 [2024-07-15 16:32:43.567719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.631 [2024-07-15 16:32:43.567750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.631 [2024-07-15 16:32:43.567781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.631 [2024-07-15 16:32:43.567982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.631 [2024-07-15 16:32:43.568199] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.631 [2024-07-15 16:32:43.568218] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.631 [2024-07-15 16:32:43.568231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.631 [2024-07-15 16:32:43.571218] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.631 [2024-07-15 16:32:43.580493] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.631 [2024-07-15 16:32:43.580997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.631 [2024-07-15 16:32:43.581036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.631 [2024-07-15 16:32:43.581051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.631 [2024-07-15 16:32:43.581259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.631 [2024-07-15 16:32:43.581463] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.631 [2024-07-15 16:32:43.581482] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.631 [2024-07-15 16:32:43.581495] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.631 [2024-07-15 16:32:43.584482] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.631 [2024-07-15 16:32:43.593787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.631 [2024-07-15 16:32:43.594276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.631 [2024-07-15 16:32:43.594315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.631 [2024-07-15 16:32:43.594330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.631 [2024-07-15 16:32:43.594526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.631 [2024-07-15 16:32:43.594724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.631 [2024-07-15 16:32:43.594766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.631 [2024-07-15 16:32:43.594781] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.631 [2024-07-15 16:32:43.597767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.631 [2024-07-15 16:32:43.607413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.631 [2024-07-15 16:32:43.607908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.631 [2024-07-15 16:32:43.607936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.631 [2024-07-15 16:32:43.607967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.631 [2024-07-15 16:32:43.608196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.631 [2024-07-15 16:32:43.608429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.631 [2024-07-15 16:32:43.608451] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.631 [2024-07-15 16:32:43.608464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.890 [2024-07-15 16:32:43.611619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.890 [2024-07-15 16:32:43.620615] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.890 [2024-07-15 16:32:43.620990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.890 [2024-07-15 16:32:43.621017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.890 [2024-07-15 16:32:43.621032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.621244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.621443] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.621462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.621474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.624472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.891 [2024-07-15 16:32:43.633850] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.634239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.634279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.634293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.634503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.634702] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.634721] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.634734] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.637724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.891 [2024-07-15 16:32:43.647407] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.647818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.647843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.647857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.648067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.648267] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.648285] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.648298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.651329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.891 [2024-07-15 16:32:43.660641] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.661065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.661091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.661105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.661315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.661520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.661540] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.661553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.664556] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.891 [2024-07-15 16:32:43.673875] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.674252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.674291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.674310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.674521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.674734] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.674762] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.674775] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.677766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.891 [2024-07-15 16:32:43.687083] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.687470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.687509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.687523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.687756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.687962] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.687982] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.687994] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.690981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.891 [2024-07-15 16:32:43.700298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.700684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.700709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.700745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.700950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.701166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.701187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.701201] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.704261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.891 [2024-07-15 16:32:43.713835] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.714267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.714291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.714320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.714516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.714715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.714760] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.714775] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.717877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.891 [2024-07-15 16:32:43.727179] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.727614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.727660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.727675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.727904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.728152] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.728171] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.728184] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.731237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.891 [2024-07-15 16:32:43.740646] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.741014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.741060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.741075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.741286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.741486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.741506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.741518] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.744565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.891 [2024-07-15 16:32:43.754060] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.754404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.891 [2024-07-15 16:32:43.754430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.891 [2024-07-15 16:32:43.754445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.891 [2024-07-15 16:32:43.754640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.891 [2024-07-15 16:32:43.754874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.891 [2024-07-15 16:32:43.754895] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.891 [2024-07-15 16:32:43.754909] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.891 [2024-07-15 16:32:43.757959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.891 [2024-07-15 16:32:43.767340] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.891 [2024-07-15 16:32:43.767755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.892 [2024-07-15 16:32:43.767782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.892 [2024-07-15 16:32:43.767798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.892 [2024-07-15 16:32:43.768006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.892 [2024-07-15 16:32:43.768224] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.892 [2024-07-15 16:32:43.768244] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.892 [2024-07-15 16:32:43.768257] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.892 [2024-07-15 16:32:43.771317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.892 [2024-07-15 16:32:43.780612] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.892 [2024-07-15 16:32:43.780985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.892 [2024-07-15 16:32:43.781025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.892 [2024-07-15 16:32:43.781040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.892 [2024-07-15 16:32:43.781236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.892 [2024-07-15 16:32:43.781435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.892 [2024-07-15 16:32:43.781454] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.892 [2024-07-15 16:32:43.781466] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.892 [2024-07-15 16:32:43.784457] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.892 [2024-07-15 16:32:43.793950] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.892 [2024-07-15 16:32:43.794412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.892 [2024-07-15 16:32:43.794437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.892 [2024-07-15 16:32:43.794465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.892 [2024-07-15 16:32:43.794660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.892 [2024-07-15 16:32:43.794888] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.892 [2024-07-15 16:32:43.794908] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.892 [2024-07-15 16:32:43.794921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.892 [2024-07-15 16:32:43.797919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.892 [2024-07-15 16:32:43.807248] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.892 [2024-07-15 16:32:43.807674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.892 [2024-07-15 16:32:43.807699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.892 [2024-07-15 16:32:43.807727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.892 [2024-07-15 16:32:43.807959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.892 [2024-07-15 16:32:43.808177] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.892 [2024-07-15 16:32:43.808196] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.892 [2024-07-15 16:32:43.808209] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.892 [2024-07-15 16:32:43.811200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.892 [2024-07-15 16:32:43.820546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.892 [2024-07-15 16:32:43.820926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.892 [2024-07-15 16:32:43.820966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.892 [2024-07-15 16:32:43.820981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.892 [2024-07-15 16:32:43.821208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.892 [2024-07-15 16:32:43.821407] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.892 [2024-07-15 16:32:43.821427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.892 [2024-07-15 16:32:43.821440] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.892 [2024-07-15 16:32:43.824432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:00.892 [2024-07-15 16:32:43.833854] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.892 [2024-07-15 16:32:43.834235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.892 [2024-07-15 16:32:43.834275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:00.892 [2024-07-15 16:32:43.834290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:00.892 [2024-07-15 16:32:43.834518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:00.892 [2024-07-15 16:32:43.834717] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:00.892 [2024-07-15 16:32:43.834744] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:00.892 [2024-07-15 16:32:43.834774] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.892 [2024-07-15 16:32:43.837888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:00.892 [2024-07-15 16:32:43.847065] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.892 [2024-07-15 16:32:43.847439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.892 [2024-07-15 16:32:43.847464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:00.892 [2024-07-15 16:32:43.847479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:00.892 [2024-07-15 16:32:43.847675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:00.892 [2024-07-15 16:32:43.847902] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.892 [2024-07-15 16:32:43.847923] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.892 [2024-07-15 16:32:43.847941] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.892 [2024-07-15 16:32:43.850932] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:00.892 [2024-07-15 16:32:43.860404] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.892 [2024-07-15 16:32:43.860813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.892 [2024-07-15 16:32:43.860839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:00.892 [2024-07-15 16:32:43.860854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:00.892 [2024-07-15 16:32:43.861084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:00.892 [2024-07-15 16:32:43.861283] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:00.892 [2024-07-15 16:32:43.861302] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:00.892 [2024-07-15 16:32:43.861314] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.892 [2024-07-15 16:32:43.864342] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.152 [2024-07-15 16:32:43.873834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.152 [2024-07-15 16:32:43.874282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.152 [2024-07-15 16:32:43.874308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.874324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.874532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.874777] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.874798] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.874811] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.877870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.887322] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.887831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.887858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.887889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.888104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.888322] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.888343] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.888356] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.891412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.900547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.900932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.900972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.900987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.901214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.901412] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.901431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.901444] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.904440] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.913792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.914282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.914306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.914336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.914531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.914756] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.914777] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.914790] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.917781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.927087] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.927548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.927581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.927610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.927835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.928055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.928075] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.928087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.931075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.940316] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.940743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.940781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.940796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.941016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.941234] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.941253] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.941265] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.944295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.953534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.954024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.954062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.954077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.954286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.954485] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.954504] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.954516] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.957585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.966966] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.967458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.967484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.967514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.967746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.968004] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.968025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.968037] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.971109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.980201] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.980629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.980653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.980682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.980908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.981127] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.981146] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.981164] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.984156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:43.994021] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:43.994532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:43.994586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:43.994604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:43.994853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:43.995097] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:43.995120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:43.995136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:43.998713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:44.008016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:44.008503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.153 [2024-07-15 16:32:44.008534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.153 [2024-07-15 16:32:44.008552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.153 [2024-07-15 16:32:44.008803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.153 [2024-07-15 16:32:44.009047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.153 [2024-07-15 16:32:44.009071] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.153 [2024-07-15 16:32:44.009086] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.153 [2024-07-15 16:32:44.012669] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.153 [2024-07-15 16:32:44.021981] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.153 [2024-07-15 16:32:44.022477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.022508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.022525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.022776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.023020] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.023043] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.023058] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.026637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.154 [2024-07-15 16:32:44.035949] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.154 [2024-07-15 16:32:44.036459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.036517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.036536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.036791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.037036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.037059] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.037074] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.040654] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.154 [2024-07-15 16:32:44.049973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.154 [2024-07-15 16:32:44.050460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.050490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.050508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.050757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.051001] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.051025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.051040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.054621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.154 [2024-07-15 16:32:44.063935] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.154 [2024-07-15 16:32:44.064456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.064503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.064521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.064772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.065016] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.065039] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.065054] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.068636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.154 [2024-07-15 16:32:44.077947] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.154 [2024-07-15 16:32:44.078406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.078437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.078454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.078693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.078953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.078977] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.078992] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.082577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.154 [2024-07-15 16:32:44.091888] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.154 [2024-07-15 16:32:44.092376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.092406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.092424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.092663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.092917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.092941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.092956] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.096541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.154 [2024-07-15 16:32:44.105866] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.154 [2024-07-15 16:32:44.106374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.106405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.106423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.106662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.106917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.106941] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.106956] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.110538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.154 [2024-07-15 16:32:44.119851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.154 [2024-07-15 16:32:44.120359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.154 [2024-07-15 16:32:44.120409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.154 [2024-07-15 16:32:44.120427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.154 [2024-07-15 16:32:44.120665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.154 [2024-07-15 16:32:44.120920] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.154 [2024-07-15 16:32:44.120944] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.154 [2024-07-15 16:32:44.120959] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.154 [2024-07-15 16:32:44.124545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.414 [2024-07-15 16:32:44.133864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.414 [2024-07-15 16:32:44.134375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.414 [2024-07-15 16:32:44.134427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.414 [2024-07-15 16:32:44.134445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.414 [2024-07-15 16:32:44.134684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.414 [2024-07-15 16:32:44.134938] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.414 [2024-07-15 16:32:44.134962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.414 [2024-07-15 16:32:44.134978] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.414 [2024-07-15 16:32:44.138571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.414 [2024-07-15 16:32:44.147891] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.414 [2024-07-15 16:32:44.148349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.414 [2024-07-15 16:32:44.148379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.414 [2024-07-15 16:32:44.148397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.414 [2024-07-15 16:32:44.148636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.414 [2024-07-15 16:32:44.148892] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.414 [2024-07-15 16:32:44.148916] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.414 [2024-07-15 16:32:44.148931] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.414 [2024-07-15 16:32:44.152511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.414 [2024-07-15 16:32:44.161822] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.414 [2024-07-15 16:32:44.162329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.414 [2024-07-15 16:32:44.162385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.414 [2024-07-15 16:32:44.162403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.414 [2024-07-15 16:32:44.162641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.414 [2024-07-15 16:32:44.162897] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.414 [2024-07-15 16:32:44.162921] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.414 [2024-07-15 16:32:44.162936] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.414 [2024-07-15 16:32:44.166516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.414 [2024-07-15 16:32:44.175833] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.414 [2024-07-15 16:32:44.176347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.414 [2024-07-15 16:32:44.176395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.414 [2024-07-15 16:32:44.176419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.414 [2024-07-15 16:32:44.176659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.414 [2024-07-15 16:32:44.176914] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.414 [2024-07-15 16:32:44.176938] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.414 [2024-07-15 16:32:44.176953] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.414 [2024-07-15 16:32:44.180538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.414 [2024-07-15 16:32:44.189851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.414 [2024-07-15 16:32:44.190320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.414 [2024-07-15 16:32:44.190371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.414 [2024-07-15 16:32:44.190389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.414 [2024-07-15 16:32:44.190628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.414 [2024-07-15 16:32:44.190883] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.414 [2024-07-15 16:32:44.190907] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.414 [2024-07-15 16:32:44.190921] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.414 [2024-07-15 16:32:44.194505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.414 [2024-07-15 16:32:44.203824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.414 [2024-07-15 16:32:44.204278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.414 [2024-07-15 16:32:44.204327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.414 [2024-07-15 16:32:44.204345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.414 [2024-07-15 16:32:44.204584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.204841] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.204866] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.204881] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.208465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.217787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.218298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.218348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.218366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.218604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.218858] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.218887] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.218902] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.222484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.231794] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.232303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.232353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.232370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.232608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.232863] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.232888] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.232903] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.236480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.245800] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.246311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.246360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.246378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.246616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.246872] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.246896] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.246912] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.250488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.259796] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.260299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.260350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.260368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.260606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.260862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.260886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.260901] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.264478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.273792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.274303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.274355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.274372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.274610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.274866] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.274890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.274905] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.278480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.287784] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.288289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.288320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.288337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.288576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.288832] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.288856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.288872] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.292448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.301752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.302212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.302265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.302283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.302521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.302775] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.302799] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.302814] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.306393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.315696] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.316211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.316262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.316280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.316524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.316780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.316804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.316819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.320395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.329695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.330192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.330238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.330256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.330494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.330748] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.330771] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.330787] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.334362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.343668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.344152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.344205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.344223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.344461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.344704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.415 [2024-07-15 16:32:44.344727] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.415 [2024-07-15 16:32:44.344753] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.415 [2024-07-15 16:32:44.348334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.415 [2024-07-15 16:32:44.357623] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.415 [2024-07-15 16:32:44.358132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.415 [2024-07-15 16:32:44.358183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.415 [2024-07-15 16:32:44.358201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.415 [2024-07-15 16:32:44.358439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.415 [2024-07-15 16:32:44.358682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.416 [2024-07-15 16:32:44.358705] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.416 [2024-07-15 16:32:44.358725] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.416 [2024-07-15 16:32:44.362311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.416 [2024-07-15 16:32:44.371610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.416 [2024-07-15 16:32:44.372123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.416 [2024-07-15 16:32:44.372171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.416 [2024-07-15 16:32:44.372189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.416 [2024-07-15 16:32:44.372427] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.416 [2024-07-15 16:32:44.372670] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.416 [2024-07-15 16:32:44.372693] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.416 [2024-07-15 16:32:44.372708] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.416 [2024-07-15 16:32:44.376301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.416 [2024-07-15 16:32:44.385608] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.416 [2024-07-15 16:32:44.386135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.416 [2024-07-15 16:32:44.386189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.416 [2024-07-15 16:32:44.386207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.416 [2024-07-15 16:32:44.386445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.416 [2024-07-15 16:32:44.386688] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.416 [2024-07-15 16:32:44.386712] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.416 [2024-07-15 16:32:44.386727] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.416 [2024-07-15 16:32:44.390318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.676 [2024-07-15 16:32:44.399630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.676 [2024-07-15 16:32:44.400102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.676 [2024-07-15 16:32:44.400153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.676 [2024-07-15 16:32:44.400171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.676 [2024-07-15 16:32:44.400410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.676 [2024-07-15 16:32:44.400654] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.676 [2024-07-15 16:32:44.400677] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.676 [2024-07-15 16:32:44.400692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.676 [2024-07-15 16:32:44.404283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.676 [2024-07-15 16:32:44.413584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.676 [2024-07-15 16:32:44.414117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.676 [2024-07-15 16:32:44.414166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.676 [2024-07-15 16:32:44.414184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.676 [2024-07-15 16:32:44.414423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.676 [2024-07-15 16:32:44.414666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.676 [2024-07-15 16:32:44.414690] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.676 [2024-07-15 16:32:44.414705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.676 [2024-07-15 16:32:44.418296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.676 [2024-07-15 16:32:44.427514] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.676 [2024-07-15 16:32:44.428025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.676 [2024-07-15 16:32:44.428076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.676 [2024-07-15 16:32:44.428094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.676 [2024-07-15 16:32:44.428333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.676 [2024-07-15 16:32:44.428576] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.676 [2024-07-15 16:32:44.428599] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.676 [2024-07-15 16:32:44.428614] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.676 [2024-07-15 16:32:44.432204] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.676 [2024-07-15 16:32:44.441506] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.676 [2024-07-15 16:32:44.442034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.676 [2024-07-15 16:32:44.442083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.676 [2024-07-15 16:32:44.442101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.676 [2024-07-15 16:32:44.442339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.676 [2024-07-15 16:32:44.442582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.676 [2024-07-15 16:32:44.442605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.676 [2024-07-15 16:32:44.442620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.676 [2024-07-15 16:32:44.446210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.676 [2024-07-15 16:32:44.455515] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.676 [2024-07-15 16:32:44.456037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.676 [2024-07-15 16:32:44.456087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.676 [2024-07-15 16:32:44.456105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.676 [2024-07-15 16:32:44.456344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.676 [2024-07-15 16:32:44.456592] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.676 [2024-07-15 16:32:44.456616] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.676 [2024-07-15 16:32:44.456631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.676 [2024-07-15 16:32:44.460228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.676 [2024-07-15 16:32:44.469543] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.676 [2024-07-15 16:32:44.470073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.677 [2024-07-15 16:32:44.470124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.677 [2024-07-15 16:32:44.470142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.677 [2024-07-15 16:32:44.470380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.677 [2024-07-15 16:32:44.470623] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.677 [2024-07-15 16:32:44.470646] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.677 [2024-07-15 16:32:44.470661] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.677 [2024-07-15 16:32:44.474254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.677 [2024-07-15 16:32:44.483569] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.677 [2024-07-15 16:32:44.484004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.677 [2024-07-15 16:32:44.484035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.677 [2024-07-15 16:32:44.484053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.677 [2024-07-15 16:32:44.484292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.677 [2024-07-15 16:32:44.484535] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.677 [2024-07-15 16:32:44.484558] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.677 [2024-07-15 16:32:44.484573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.677 [2024-07-15 16:32:44.488165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.677 [2024-07-15 16:32:44.497480] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.677 [2024-07-15 16:32:44.497891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.677 [2024-07-15 16:32:44.497922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420
00:34:01.677 [2024-07-15 16:32:44.497940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set
00:34:01.677 [2024-07-15 16:32:44.498179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor
00:34:01.677 [2024-07-15 16:32:44.498423] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.677 [2024-07-15 16:32:44.498446] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.677 [2024-07-15 16:32:44.498461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.677 [2024-07-15 16:32:44.502056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:01.677 [2024-07-15 16:32:44.511369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.511852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.511904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.511922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.512161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.512404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.512427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.512442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.516046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.677 [2024-07-15 16:32:44.525350] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.525851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.525882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.525900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.526139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.526382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.526405] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.526420] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.530010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.677 [2024-07-15 16:32:44.539315] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.539815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.539846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.539864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.540103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.540346] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.540369] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.540384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.543980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.677 [2024-07-15 16:32:44.553301] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.553798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.553834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.553853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.554092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.554335] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.554358] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.554374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.557967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.677 [2024-07-15 16:32:44.567280] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.567784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.567816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.567834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.568073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.568317] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.568340] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.568355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.571950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.677 [2024-07-15 16:32:44.581262] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.581716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.581754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.581774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.582013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.582256] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.582280] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.582295] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.585888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.677 [2024-07-15 16:32:44.595191] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.595674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.595705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.595723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.595972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.596225] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.596249] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.596264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.599855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.677 [2024-07-15 16:32:44.609154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.677 [2024-07-15 16:32:44.609666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.677 [2024-07-15 16:32:44.609697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.677 [2024-07-15 16:32:44.609715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.677 [2024-07-15 16:32:44.609965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.677 [2024-07-15 16:32:44.610209] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.677 [2024-07-15 16:32:44.610232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.677 [2024-07-15 16:32:44.610248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.677 [2024-07-15 16:32:44.613835] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.677 [2024-07-15 16:32:44.623151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.678 [2024-07-15 16:32:44.623662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.678 [2024-07-15 16:32:44.623693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.678 [2024-07-15 16:32:44.623711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.678 [2024-07-15 16:32:44.623961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.678 [2024-07-15 16:32:44.624205] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.678 [2024-07-15 16:32:44.624228] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.678 [2024-07-15 16:32:44.624243] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.678 [2024-07-15 16:32:44.627830] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.678 [2024-07-15 16:32:44.637132] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.678 [2024-07-15 16:32:44.637591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.678 [2024-07-15 16:32:44.637621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.678 [2024-07-15 16:32:44.637639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.678 [2024-07-15 16:32:44.637890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.678 [2024-07-15 16:32:44.638134] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.678 [2024-07-15 16:32:44.638157] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.678 [2024-07-15 16:32:44.638172] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.678 [2024-07-15 16:32:44.641757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.678 [2024-07-15 16:32:44.651069] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.678 [2024-07-15 16:32:44.651565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.678 [2024-07-15 16:32:44.651595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.678 [2024-07-15 16:32:44.651613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.678 [2024-07-15 16:32:44.651865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.678 [2024-07-15 16:32:44.652109] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.678 [2024-07-15 16:32:44.652132] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.678 [2024-07-15 16:32:44.652147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.655732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.938 [2024-07-15 16:32:44.665044] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.665498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.665528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.665546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.665797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.666041] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.666065] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.666080] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.669659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.938 [2024-07-15 16:32:44.678967] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.679473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.679520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.679538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.679790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.680034] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.680057] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.680072] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.683652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.938 [2024-07-15 16:32:44.692957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.693459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.693510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.693533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.693786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.694030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.694053] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.694068] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.697647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.938 [2024-07-15 16:32:44.706955] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.707469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.707517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.707534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.707785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.708030] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.708053] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.708068] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.711653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.938 [2024-07-15 16:32:44.720972] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.721401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.721432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.721450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.721689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.721942] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.721966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.721981] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.725562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.938 [2024-07-15 16:32:44.734873] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.735383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.735414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.735432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.735671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.735926] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.735955] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.735972] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.739557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.938 [2024-07-15 16:32:44.748875] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.749377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.749407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.749425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.749664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.749919] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.749943] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.749958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.753535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.938 [2024-07-15 16:32:44.762839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.763335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.763365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.763383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.763622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.763877] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.763901] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.763916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.767500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.938 [2024-07-15 16:32:44.776826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.777214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.777245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.938 [2024-07-15 16:32:44.777263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.938 [2024-07-15 16:32:44.777501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.938 [2024-07-15 16:32:44.777756] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.938 [2024-07-15 16:32:44.777780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.938 [2024-07-15 16:32:44.777795] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.938 [2024-07-15 16:32:44.781375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.938 [2024-07-15 16:32:44.790708] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.938 [2024-07-15 16:32:44.791145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.938 [2024-07-15 16:32:44.791177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.791195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.791432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.791675] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.791698] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.791714] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.795299] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.939 [2024-07-15 16:32:44.804625] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.805010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.805041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.805059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.805297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.805540] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.805562] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.805578] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.809187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.939 [2024-07-15 16:32:44.818508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.818893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.818924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.818942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.819180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.819424] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.819447] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.819462] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.823050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.939 [2024-07-15 16:32:44.832361] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.832752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.832783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.832801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.833044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.833287] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.833311] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.833326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.836929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.939 [2024-07-15 16:32:44.846249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.846662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.846692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.846710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.846958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.847202] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.847225] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.847240] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.850838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.939 [2024-07-15 16:32:44.860141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.860555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.860607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.860625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.860872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.861116] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.861140] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.861155] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.864734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.939 [2024-07-15 16:32:44.874051] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.874491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.874542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.874570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.874818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.875061] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.875085] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.875106] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.878690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.939 [2024-07-15 16:32:44.888004] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.888442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.888493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.888511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.888758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.889002] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.889025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.889040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.892624] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:01.939 [2024-07-15 16:32:44.901937] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.902330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.902361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.902379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.902617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:01.939 [2024-07-15 16:32:44.902871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:01.939 [2024-07-15 16:32:44.902895] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:01.939 [2024-07-15 16:32:44.902910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:01.939 [2024-07-15 16:32:44.906487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:01.939 [2024-07-15 16:32:44.915799] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:01.939 [2024-07-15 16:32:44.916216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.939 [2024-07-15 16:32:44.916247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:01.939 [2024-07-15 16:32:44.916264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:01.939 [2024-07-15 16:32:44.916503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:44.916756] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:44.916781] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:44.916797] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:44.920374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.199 [2024-07-15 16:32:44.929676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:44.930154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:44.930189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:44.930208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:44.930447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:44.930690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:44.930713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:44.930728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:44.934324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.199 [2024-07-15 16:32:44.943636] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:44.944164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:44.944196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:44.944214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:44.944454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:44.944697] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:44.944720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:44.944735] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:44.948327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.199 [2024-07-15 16:32:44.957656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:44.958143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:44.958175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:44.958193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:44.958431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:44.958675] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:44.958699] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:44.958714] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:44.962303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.199 [2024-07-15 16:32:44.971623] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:44.972052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:44.972103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:44.972121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:44.972360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:44.972609] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:44.972633] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:44.972649] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:44.976241] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.199 [2024-07-15 16:32:44.985558] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:44.985948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:44.985978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:44.985996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:44.986234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:44.986478] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:44.986501] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:44.986516] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:44.990108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.199 [2024-07-15 16:32:44.999436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:44.999817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:44.999849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:44.999867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:45.000106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:45.000349] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:45.000373] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:45.000388] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:45.003978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.199 [2024-07-15 16:32:45.013304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:45.013707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:45.013746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:45.013766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:45.014005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:45.014249] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:45.014272] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:45.014289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:45.017884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.199 [2024-07-15 16:32:45.027203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:45.027621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:45.027652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:45.027670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:45.027919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:45.028163] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:45.028187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:45.028202] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:45.031807] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.199 [2024-07-15 16:32:45.041139] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:45.041607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:45.041659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:45.041677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.199 [2024-07-15 16:32:45.041926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.199 [2024-07-15 16:32:45.042170] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.199 [2024-07-15 16:32:45.042193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.199 [2024-07-15 16:32:45.042208] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.199 [2024-07-15 16:32:45.045802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.199 [2024-07-15 16:32:45.055113] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.199 [2024-07-15 16:32:45.055528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.199 [2024-07-15 16:32:45.055577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.199 [2024-07-15 16:32:45.055595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.055844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.056088] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.056111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.056127] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.059706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.200 [2024-07-15 16:32:45.069023] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.069452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.069503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.069528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.069778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.070022] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.070046] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.070061] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.073655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.200 [2024-07-15 16:32:45.082977] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.083369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.083400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.083417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.083656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.083913] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.083937] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.083952] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.087534] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.200 [2024-07-15 16:32:45.096852] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.097261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.097292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.097310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.097548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.097804] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.097828] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.097843] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.101424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.200 [2024-07-15 16:32:45.110726] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.111135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.111165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.111182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.111420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.111663] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.111691] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.111707] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.115299] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.200 [2024-07-15 16:32:45.124605] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.125020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.125051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.125069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.125308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.125550] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.125574] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.125589] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.129183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.200 [2024-07-15 16:32:45.138495] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.138924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.138955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.138973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.139212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.139455] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.139478] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.139493] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.143083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.200 [2024-07-15 16:32:45.152385] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.152774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.152806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.152824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.153063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.153307] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.153330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.153345] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.156936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.200 [2024-07-15 16:32:45.166242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.200 [2024-07-15 16:32:45.166670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.200 [2024-07-15 16:32:45.166718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.200 [2024-07-15 16:32:45.166735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.200 [2024-07-15 16:32:45.166985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.200 [2024-07-15 16:32:45.167228] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.200 [2024-07-15 16:32:45.167252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.200 [2024-07-15 16:32:45.167267] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.200 [2024-07-15 16:32:45.170851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.459 [2024-07-15 16:32:45.180150] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.459 [2024-07-15 16:32:45.180557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.180605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.180623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.180874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.181117] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.181141] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.181156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.184733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.460 [2024-07-15 16:32:45.194032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.194412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.194442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.194460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.194698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.194951] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.194975] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.194990] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.198564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.460 [2024-07-15 16:32:45.208077] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.208485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.208515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.208538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.208790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.209034] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.209057] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.209072] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.212652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.460 [2024-07-15 16:32:45.221967] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.222401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.222453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.222471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.222710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.222961] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.222986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.223001] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.226580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.460 [2024-07-15 16:32:45.235883] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.236315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.236363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.236381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.236619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.236876] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.236900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.236916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.240500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.460 [2024-07-15 16:32:45.249818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.250246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.250297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.250316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.250554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.250810] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.250838] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.250854] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.254434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.460 [2024-07-15 16:32:45.263759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.264167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.264198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.264216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.264454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.264698] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.264721] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.264746] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.268324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.460 [2024-07-15 16:32:45.277614] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.278027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.278059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.278076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.278315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.278559] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.278582] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.278597] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.282188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.460 [2024-07-15 16:32:45.291490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.291922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.291952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.291970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.292208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.292452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.292474] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.292490] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.296081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.460 [2024-07-15 16:32:45.305379] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.305813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.305844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.305862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.306101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.306344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.306367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.306382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.309975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.460 [2024-07-15 16:32:45.319278] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.319678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.460 [2024-07-15 16:32:45.319709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.460 [2024-07-15 16:32:45.319727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.460 [2024-07-15 16:32:45.319975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.460 [2024-07-15 16:32:45.320218] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.460 [2024-07-15 16:32:45.320242] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.460 [2024-07-15 16:32:45.320257] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.460 [2024-07-15 16:32:45.323845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.460 [2024-07-15 16:32:45.333148] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.460 [2024-07-15 16:32:45.333564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.333595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.333612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.333862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.334106] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.334129] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.334144] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.337727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.461 [2024-07-15 16:32:45.347035] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.461 [2024-07-15 16:32:45.347451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.347481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.347499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.347754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.347997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.348021] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.348036] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.351616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.461 [2024-07-15 16:32:45.360927] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.461 [2024-07-15 16:32:45.361318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.361348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.361365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.361604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.361857] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.361881] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.361896] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.365473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.461 [2024-07-15 16:32:45.374778] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.461 [2024-07-15 16:32:45.375186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.375217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.375235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.375474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.375718] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.375750] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.375767] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.379351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.461 [2024-07-15 16:32:45.388673] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.461 [2024-07-15 16:32:45.389089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.389120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.389138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.389377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.389620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.389643] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.389664] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.393256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.461 [2024-07-15 16:32:45.402555] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.461 [2024-07-15 16:32:45.402970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.403001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.403019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.403258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.403501] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.403524] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.403539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.407128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.461 [2024-07-15 16:32:45.416429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.461 [2024-07-15 16:32:45.416833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.416864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.416882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.417121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.417364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.417387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.417403] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.420990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.461 [2024-07-15 16:32:45.430289] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.461 [2024-07-15 16:32:45.430671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.461 [2024-07-15 16:32:45.430720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.461 [2024-07-15 16:32:45.430747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.461 [2024-07-15 16:32:45.430989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.461 [2024-07-15 16:32:45.431232] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.461 [2024-07-15 16:32:45.431255] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.461 [2024-07-15 16:32:45.431269] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.461 [2024-07-15 16:32:45.434853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.720 [2024-07-15 16:32:45.444161] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.444549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.444605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.444624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.444875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.445119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.445142] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.445157] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.448903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.720 [2024-07-15 16:32:45.458200] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.458605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.458636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.458654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.458904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.459148] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.459171] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.459187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.462775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.720 [2024-07-15 16:32:45.472070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.472497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.472527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.472545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.472794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.473038] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.473062] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.473077] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.476655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.720 [2024-07-15 16:32:45.485965] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.486355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.486386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.486403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.486642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.486904] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.486928] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.486944] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.490527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.720 [2024-07-15 16:32:45.499846] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.500255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.500286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.500303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.500542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.500795] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.500819] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.500834] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.504410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.720 [2024-07-15 16:32:45.513712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.514132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.514163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.514180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.514419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.514662] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.514685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.514700] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.518289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.720 [2024-07-15 16:32:45.527592] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.527985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.528016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.528033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.528272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.528514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.528538] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.528553] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.532147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.720 [2024-07-15 16:32:45.541464] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.541847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.541878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.541895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.542134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.542378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.542401] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.542416] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.546010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.720 [2024-07-15 16:32:45.555312] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.555723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.555760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.720 [2024-07-15 16:32:45.555779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.720 [2024-07-15 16:32:45.556018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.720 [2024-07-15 16:32:45.556261] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.720 [2024-07-15 16:32:45.556284] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.720 [2024-07-15 16:32:45.556299] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.720 [2024-07-15 16:32:45.559889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.720 [2024-07-15 16:32:45.569188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.720 [2024-07-15 16:32:45.569603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.720 [2024-07-15 16:32:45.569633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.569651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.569901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.570145] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.570168] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.570183] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.573771] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.721 [2024-07-15 16:32:45.583091] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 [2024-07-15 16:32:45.583469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.583500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.583523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.583773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.584017] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.584040] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.584055] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.587634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.721 [2024-07-15 16:32:45.596947] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 [2024-07-15 16:32:45.597351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.597381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.597399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.597638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.597891] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.597915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.597930] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.601508] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.721 [2024-07-15 16:32:45.610820] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 [2024-07-15 16:32:45.611248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.611298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.611316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.611555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.611808] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.611832] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.611847] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.615425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.721 [2024-07-15 16:32:45.624727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 [2024-07-15 16:32:45.625132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.625181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.625198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.625437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.625680] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.625708] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.625723] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.629313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.721 [2024-07-15 16:32:45.638625] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 [2024-07-15 16:32:45.639013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.639043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.639061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.639299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.639543] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.639566] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.639581] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.643169] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.721 [2024-07-15 16:32:45.652474] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 [2024-07-15 16:32:45.652896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.652926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.652943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.653182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.653425] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.653448] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.653464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.657052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.721 [2024-07-15 16:32:45.666363] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 [2024-07-15 16:32:45.666791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.666823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.666840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.667079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.667322] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.667346] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.667361] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.670952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.721 [2024-07-15 16:32:45.680249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:34:02.721 [2024-07-15 16:32:45.680639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:34:02.721 [2024-07-15 16:32:45.680670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 
00:34:02.721 [2024-07-15 16:32:45.680688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 
00:34:02.721 [2024-07-15 16:32:45.680938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 
00:34:02.721 [2024-07-15 16:32:45.681181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:34:02.721 [2024-07-15 16:32:45.681204] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:34:02.721 [2024-07-15 16:32:45.681219] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:02.721 [2024-07-15 16:32:45.684805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 475221 Killed "${NVMF_APP[@]}" "$@" 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=476171 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 476171 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 476171 ']' 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 
00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:02.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
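This is the turning point: the harness has killed the previous nvmf_tgt (pid 475221), which is why every reconnect attempt above was refused, and tgt_init now launches a fresh target (pid 476171) inside the cvl_0_0_ns_spdk network namespace. On the host side, the spdk_nvme_ctrlr_reconnect_poll_async errors correspond to SPDK's public async reconnect flow. A sketch against the spdk/nvme.h API, assuming the documented return convention (0 when reconnected, -EAGAIN while in progress, another negative errno on failure); bdev_nvme drives an equivalent state machine internally rather than this exact loop:

    /* reconnect.c - sketch of the public API behind the poll messages above */
    #include <errno.h>
    #include "spdk/nvme.h"

    static int reconnect_once(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);   /* tear down the old admin qpair */
        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);       /* kick off a new connect */
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);                      /* still connecting */
        return rc; /* 0 on success; negative once the ctrlr enters the failed state */
    }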
00:34:02.721 [2024-07-15 16:32:45.694103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:02.721 16:32:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:02.721 [2024-07-15 16:32:45.694479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.721 [2024-07-15 16:32:45.694510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.721 [2024-07-15 16:32:45.694528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.721 [2024-07-15 16:32:45.694776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.721 [2024-07-15 16:32:45.695020] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.721 [2024-07-15 16:32:45.695044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.721 [2024-07-15 16:32:45.695059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.721 [2024-07-15 16:32:45.698638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.980 [2024-07-15 16:32:45.708151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.980 [2024-07-15 16:32:45.708565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.980 [2024-07-15 16:32:45.708596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.980 [2024-07-15 16:32:45.708614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.980 [2024-07-15 16:32:45.708862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.980 [2024-07-15 16:32:45.709106] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.980 [2024-07-15 16:32:45.709129] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.980 [2024-07-15 16:32:45.709145] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.980 [2024-07-15 16:32:45.712727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.980 [2024-07-15 16:32:45.722032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.980 [2024-07-15 16:32:45.722418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.980 [2024-07-15 16:32:45.722448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.980 [2024-07-15 16:32:45.722466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.980 [2024-07-15 16:32:45.722704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.980 [2024-07-15 16:32:45.722961] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.980 [2024-07-15 16:32:45.722985] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.980 [2024-07-15 16:32:45.723000] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.980 [2024-07-15 16:32:45.726575] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.980 [2024-07-15 16:32:45.735906] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.980 [2024-07-15 16:32:45.736304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.980 [2024-07-15 16:32:45.736335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.980 [2024-07-15 16:32:45.736354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.980 [2024-07-15 16:32:45.736603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.980 [2024-07-15 16:32:45.736862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.980 [2024-07-15 16:32:45.736887] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.980 [2024-07-15 16:32:45.736903] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.980 [2024-07-15 16:32:45.740478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.980 [2024-07-15 16:32:45.741937] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:34:02.980 [2024-07-15 16:32:45.742006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.980 [2024-07-15 16:32:45.749781] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.980 [2024-07-15 16:32:45.750234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.980 [2024-07-15 16:32:45.750265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.980 [2024-07-15 16:32:45.750283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.980 [2024-07-15 16:32:45.750522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.980 [2024-07-15 16:32:45.750774] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.980 [2024-07-15 16:32:45.750798] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.980 [2024-07-15 16:32:45.750814] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.980 [2024-07-15 16:32:45.754392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.980 [2024-07-15 16:32:45.763859] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.980 [2024-07-15 16:32:45.764290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.980 [2024-07-15 16:32:45.764321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.980 [2024-07-15 16:32:45.764339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.764577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.764833] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.764857] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.764872] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.768451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
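The DPDK EAL parameter line above is how nvmfappstart's flags reach DPDK. The launcher command itself appears earlier in the trace (nvmf/common.sh@480); the per-flag mapping in the comments below is interpretation, not log output:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  # -m 0xE                          -> EAL -c 0xE (cores 1-3)
  # -i 0                            -> --file-prefix=spdk0, a private hugepage/shm namespace
  # --proc-type=auto                -> primary process unless that prefix is already owned
  # --base-virtaddr=0x200000000000  -> stable base address for shared memory mappings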
00:34:02.981 [2024-07-15 16:32:45.777762] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.778184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.778215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.778232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.778479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.778722] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.778757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.778773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.782352] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.981 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.981 [2024-07-15 16:32:45.791664] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.792106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.792137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.792155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.792399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.792642] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.792665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.792681] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.796267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.981 [2024-07-15 16:32:45.805573] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.806006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.806037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.806055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.806294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.806537] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.806560] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.806575] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.810163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.981 [2024-07-15 16:32:45.819166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:02.981 [2024-07-15 16:32:45.819458] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.819957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.819988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.820006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.820245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.820489] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.820512] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.820527] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.824118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
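"Total cores available: 3" follows directly from the mask: 0xE is binary 1110, so bit 0 (core 0) is clear and cores 1, 2 and 3 are selected — exactly the three reactors that report a few lines below. An illustrative one-liner to expand such a mask:

  mask=0xE
  for bit in {0..31}; do
      (( (mask >> bit) & 1 )) && echo "core $bit"   # prints core 1, core 2, core 3
  done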
00:34:02.981 [2024-07-15 16:32:45.833449] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.834051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.834090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.834113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.834366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.834616] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.834640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.834670] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.838272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.981 [2024-07-15 16:32:45.847373] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.847863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.847895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.847913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.848153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.848396] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.848420] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.848436] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.852032] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.981 [2024-07-15 16:32:45.861332] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.861856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.861888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.861906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.862146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.862389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.981 [2024-07-15 16:32:45.862412] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.981 [2024-07-15 16:32:45.862428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.981 [2024-07-15 16:32:45.866019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.981 [2024-07-15 16:32:45.875351] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.981 [2024-07-15 16:32:45.875921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.981 [2024-07-15 16:32:45.875960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.981 [2024-07-15 16:32:45.875983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.981 [2024-07-15 16:32:45.876251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.981 [2024-07-15 16:32:45.876505] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.982 [2024-07-15 16:32:45.876530] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.982 [2024-07-15 16:32:45.876549] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.982 [2024-07-15 16:32:45.880181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.982 [2024-07-15 16:32:45.889279] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.982 [2024-07-15 16:32:45.889861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.982 [2024-07-15 16:32:45.889894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.982 [2024-07-15 16:32:45.889912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.982 [2024-07-15 16:32:45.890164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.982 [2024-07-15 16:32:45.890408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.982 [2024-07-15 16:32:45.890431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.982 [2024-07-15 16:32:45.890447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.982 [2024-07-15 16:32:45.894043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.982 [2024-07-15 16:32:45.903140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.982 [2024-07-15 16:32:45.903635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.982 [2024-07-15 16:32:45.903665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.982 [2024-07-15 16:32:45.903684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.982 [2024-07-15 16:32:45.903934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.982 [2024-07-15 16:32:45.904178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.982 [2024-07-15 16:32:45.904202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.982 [2024-07-15 16:32:45.904217] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.982 [2024-07-15 16:32:45.907803] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.982 [2024-07-15 16:32:45.912467] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.982 [2024-07-15 16:32:45.912503] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.982 [2024-07-15 16:32:45.912519] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.982 [2024-07-15 16:32:45.912532] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.982 [2024-07-15 16:32:45.912543] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
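The app_setup_trace notices above are usable as printed: with tracepoint group mask 0xFFFF enabled, a snapshot can be taken from the shared-memory trace file while the target runs. Following the log's own hint (the output file names here are assumptions):

  spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # live snapshot for instance 0
  cp /dev/shm/nvmf_trace.0 .                 # or keep the raw file for offline analysis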
00:34:02.982 [2024-07-15 16:32:45.912625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:02.982 [2024-07-15 16:32:45.912682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:02.982 [2024-07-15 16:32:45.912685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.982 [2024-07-15 16:32:45.917122] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.982 [2024-07-15 16:32:45.917570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.982 [2024-07-15 16:32:45.917602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.982 [2024-07-15 16:32:45.917622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.982 [2024-07-15 16:32:45.917884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.982 [2024-07-15 16:32:45.918132] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.982 [2024-07-15 16:32:45.918156] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.982 [2024-07-15 16:32:45.918182] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.982 [2024-07-15 16:32:45.921781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.982 [2024-07-15 16:32:45.931153] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.982 [2024-07-15 16:32:45.931681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.982 [2024-07-15 16:32:45.931721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.982 [2024-07-15 16:32:45.931750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.982 [2024-07-15 16:32:45.932002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.982 [2024-07-15 16:32:45.932266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.982 [2024-07-15 16:32:45.932291] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.982 [2024-07-15 16:32:45.932310] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.982 [2024-07-15 16:32:45.935938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.982 [2024-07-15 16:32:45.945151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.982 [2024-07-15 16:32:45.945749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.982 [2024-07-15 16:32:45.945791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:02.982 [2024-07-15 16:32:45.945813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:02.982 [2024-07-15 16:32:45.946062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:02.982 [2024-07-15 16:32:45.946312] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.982 [2024-07-15 16:32:45.946336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.982 [2024-07-15 16:32:45.946355] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.982 [2024-07-15 16:32:45.949978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.242 [2024-07-15 16:32:45.959252] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.242 [2024-07-15 16:32:45.959727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.242 [2024-07-15 16:32:45.959776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.242 [2024-07-15 16:32:45.959799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.242 [2024-07-15 16:32:45.960047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.242 [2024-07-15 16:32:45.960299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.242 [2024-07-15 16:32:45.960323] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.242 [2024-07-15 16:32:45.960342] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.242 [2024-07-15 16:32:45.963946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.242 [2024-07-15 16:32:45.973321] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.242 [2024-07-15 16:32:45.973815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.242 [2024-07-15 16:32:45.973852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.242 [2024-07-15 16:32:45.973874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.242 [2024-07-15 16:32:45.974120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.242 [2024-07-15 16:32:45.974367] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.242 [2024-07-15 16:32:45.974392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.242 [2024-07-15 16:32:45.974410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.242 [2024-07-15 16:32:45.978057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.242 [2024-07-15 16:32:45.987430] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.242 [2024-07-15 16:32:45.987906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.242 [2024-07-15 16:32:45.987947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.242 [2024-07-15 16:32:45.987968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.242 [2024-07-15 16:32:45.988223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.242 [2024-07-15 16:32:45.988472] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.242 [2024-07-15 16:32:45.988496] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.242 [2024-07-15 16:32:45.988515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.242 [2024-07-15 16:32:45.992151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.242 [2024-07-15 16:32:46.001486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.242 [2024-07-15 16:32:46.001883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.242 [2024-07-15 16:32:46.001916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.242 [2024-07-15 16:32:46.001935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.242 [2024-07-15 16:32:46.002189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.242 [2024-07-15 16:32:46.002433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.242 [2024-07-15 16:32:46.002456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.242 [2024-07-15 16:32:46.002473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.242 [2024-07-15 16:32:46.006083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.242 [2024-07-15 16:32:46.015097] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.242 [2024-07-15 16:32:46.015532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.242 [2024-07-15 16:32:46.015559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.242 [2024-07-15 16:32:46.015590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.242 [2024-07-15 16:32:46.015828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.242 [2024-07-15 16:32:46.016070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.242 [2024-07-15 16:32:46.016091] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.242 [2024-07-15 16:32:46.016105] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.242 [2024-07-15 16:32:46.019292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.242 [2024-07-15 16:32:46.028524] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.242 [2024-07-15 16:32:46.028895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.242 [2024-07-15 16:32:46.028923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.242 [2024-07-15 16:32:46.028940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.242 [2024-07-15 16:32:46.029166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.242 [2024-07-15 16:32:46.029384] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.242 [2024-07-15 16:32:46.029406] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.242 [2024-07-15 16:32:46.029425] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.242 [2024-07-15 16:32:46.032635] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.243 [2024-07-15 16:32:46.042070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.243 [2024-07-15 16:32:46.042498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.243 [2024-07-15 16:32:46.042524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.243 [2024-07-15 16:32:46.042554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.243 [2024-07-15 16:32:46.042790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.243 [2024-07-15 16:32:46.043010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.243 [2024-07-15 16:32:46.043058] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.243 [2024-07-15 16:32:46.043071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.243 [2024-07-15 16:32:46.046311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.243 [2024-07-15 16:32:46.054694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:03.243 [2024-07-15 16:32:46.055574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.243 [2024-07-15 16:32:46.055974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.243 [2024-07-15 16:32:46.056016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.243 [2024-07-15 16:32:46.056032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.243 [2024-07-15 16:32:46.056240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.243 [2024-07-15 16:32:46.056459] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.243 [2024-07-15 16:32:46.056479] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.243 [2024-07-15 16:32:46.056492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.243 [2024-07-15 16:32:46.059692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.243 [2024-07-15 16:32:46.069073] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.243 [2024-07-15 16:32:46.069486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.243 [2024-07-15 16:32:46.069512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.243 [2024-07-15 16:32:46.069527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.243 [2024-07-15 16:32:46.069757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.243 [2024-07-15 16:32:46.069997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.243 [2024-07-15 16:32:46.070019] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.243 [2024-07-15 16:32:46.070048] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:03.243 [2024-07-15 16:32:46.073241] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.243 [2024-07-15 16:32:46.082592] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.243 [2024-07-15 16:32:46.083103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.243 [2024-07-15 16:32:46.083138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.243 [2024-07-15 16:32:46.083173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.243 [2024-07-15 16:32:46.083406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.243 [2024-07-15 16:32:46.083623] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.243 [2024-07-15 16:32:46.083645] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.243 [2024-07-15 16:32:46.083662] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.243 [2024-07-15 16:32:46.086924] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.243 Malloc0 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.243 [2024-07-15 16:32:46.096315] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.243 [2024-07-15 16:32:46.096800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.243 [2024-07-15 16:32:46.096830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.243 [2024-07-15 16:32:46.096848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.243 [2024-07-15 16:32:46.097087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.243 [2024-07-15 16:32:46.097318] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.243 [2024-07-15 16:32:46.097339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.243 [2024-07-15 16:32:46.097354] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.243 [2024-07-15 16:32:46.100641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.243 [2024-07-15 16:32:46.109862] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.243 [2024-07-15 16:32:46.110246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.243 [2024-07-15 16:32:46.110287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121da10 with addr=10.0.0.2, port=4420 00:34:03.243 [2024-07-15 16:32:46.110303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da10 is same with the state(5) to be set 00:34:03.243 [2024-07-15 16:32:46.110544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da10 (9): Bad file descriptor 00:34:03.243 [2024-07-15 16:32:46.110784] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.243 [2024-07-15 16:32:46.110806] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.243 [2024-07-15 16:32:46.110820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.243 [2024-07-15 16:32:46.112572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.243 [2024-07-15 16:32:46.114071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.243 16:32:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.244 16:32:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 475506 00:34:03.244 [2024-07-15 16:32:46.123475] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.503 [2024-07-15 16:32:46.280982] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
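Interleaved with the retry noise, bdevperf.sh lines 17-21 have now rebuilt the target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and the 10.0.0.2:4420 listener — after which the host's reset finally succeeds ("Resetting controller successful"). The same sequence expressed through SPDK's rpc.py, with flags copied verbatim from the trace (using rpc.py in place of the test's rpc_cmd wrapper is an assumed equivalence):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420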
00:34:13.481 00:34:13.481 Latency(us) 00:34:13.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.481 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:13.481 Verification LBA range: start 0x0 length 0x4000 00:34:13.481 Nvme1n1 : 15.01 6857.78 26.79 9248.43 0.00 7923.81 813.13 20874.43 00:34:13.481 =================================================================================================================== 00:34:13.481 Total : 6857.78 26.79 9248.43 0.00 7923.81 813.13 20874.43 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:13.481 rmmod nvme_tcp 00:34:13.481 rmmod nvme_fabrics 00:34:13.481 rmmod nvme_keyring 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 476171 ']' 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 476171 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 476171 ']' 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 476171 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 476171 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 476171' 00:34:13.481 killing process with pid 476171 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 476171 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 476171 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:13.481 
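The Latency(us) table at the top of this stretch is internally consistent: 6857.78 IOPS at the stated 4096-byte IO size works out to 6857.78 * 4096 / 1048576 ≈ 26.79 MiB/s, matching the MiB/s column, and the large Fail/s figure is expected since the target was killed and restarted under the workload. A quick check:

  awk 'BEGIN { printf "%.2f MiB/s\n", 6857.78 * 4096 / 1048576 }'   # -> 26.79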
16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:13.481 16:32:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.860 16:32:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.860 00:34:14.860 real 0m22.346s 00:34:14.860 user 0m59.355s 00:34:14.860 sys 0m4.528s 00:34:14.860 16:32:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:14.860 16:32:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:14.860 ************************************ 00:34:14.860 END TEST nvmf_bdevperf 00:34:14.860 ************************************ 00:34:14.860 16:32:57 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:14.860 16:32:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:14.860 16:32:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:14.860 16:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.860 ************************************ 00:34:14.860 START TEST nvmf_target_disconnect 00:34:14.860 ************************************ 00:34:14.860 16:32:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:15.119 * Looking for test storage... 
00:34:15.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:15.119 16:32:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:17.024 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:17.024 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.024 16:32:59 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:17.024 Found net devices under 0000:84:00.0: cvl_0_0 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:17.024 Found net devices under 0000:84:00.1: cvl_0_1 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:17.024 16:32:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.282 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.282 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.282 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:17.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:34:17.282 00:34:17.282 --- 10.0.0.2 ping statistics --- 00:34:17.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.283 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:17.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:34:17.283 00:34:17.283 --- 10.0.0.1 ping statistics --- 00:34:17.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.283 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:17.283 ************************************ 00:34:17.283 START TEST nvmf_target_disconnect_tc1 00:34:17.283 ************************************ 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:17.283 
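The interface discovery and namespace plumbing traced above boil down to a short iproute2 sequence. A hand-run sketch under the conditions of this run (cvl_0_0/cvl_0_1 are the two E810 ports found at 0000:84:00.0 and 0000:84:00.1; lspci here is an illustrative stand-in for the script's sysfs-based PCI scan; run as root):

    # Find the NICs the same way: match PCI vendor:device, then read the netdev name.
    lspci -D -d 8086:159b
    ls /sys/bus/pci/devices/0000:84:00.0/net/
    # Target port goes into its own namespace; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, matching the ping output above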
16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:17.283 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.283 [2024-07-15 16:33:00.159698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.283 [2024-07-15 16:33:00.159797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206bef0 with addr=10.0.0.2, port=4420 00:34:17.283 [2024-07-15 16:33:00.159848] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:17.283 [2024-07-15 16:33:00.159869] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:17.283 [2024-07-15 16:33:00.159882] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:17.283 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:17.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:17.283 Initializing NVMe Controllers 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:17.283 00:34:17.283 real 0m0.097s 00:34:17.283 user 0m0.036s 00:34:17.283 sys 
0m0.058s 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:17.283 ************************************ 00:34:17.283 END TEST nvmf_target_disconnect_tc1 00:34:17.283 ************************************ 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:17.283 ************************************ 00:34:17.283 START TEST nvmf_target_disconnect_tc2 00:34:17.283 ************************************ 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=479353 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 479353 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 479353 ']' 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:17.283 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.541 [2024-07-15 16:33:00.275630] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
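disconnect_init's first step, sketched by hand: launch nvmf_tgt inside the target namespace and wait for its RPC socket to come up. The until-loop stands in for the harness's waitforlisten, and /var/tmp/spdk.sock as the default RPC socket path is an assumption of this sketch:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # Poll until the app answers RPCs, rather than sleeping a fixed time.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
            sleep 0.1
    done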
00:34:17.541 [2024-07-15 16:33:00.275709] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.541 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.541 [2024-07-15 16:33:00.346870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.542 [2024-07-15 16:33:00.438767] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.542 [2024-07-15 16:33:00.438845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.542 [2024-07-15 16:33:00.438875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.542 [2024-07-15 16:33:00.438886] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.542 [2024-07-15 16:33:00.438896] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.542 [2024-07-15 16:33:00.438992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:17.542 [2024-07-15 16:33:00.439308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:17.542 [2024-07-15 16:33:00.439382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:17.542 [2024-07-15 16:33:00.439378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.800 Malloc0 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.800 [2024-07-15 16:33:00.601322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.800 [2024-07-15 16:33:00.629595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=479397 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:17.800 16:33:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:17.800 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.793 16:33:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 479353 00:34:19.793 16:33:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:19.793 Read completed with error (sct=0, sc=8) 00:34:19.793 starting I/O failed 00:34:19.793 Read completed with error (sct=0, sc=8) 
00:34:19.793 starting I/O failed 00:34:19.793 Read completed with error (sct=0, sc=8) 00:34:19.793 starting I/O failed 00:34:19.793 Read completed with error (sct=0, sc=8) 00:34:19.793 starting I/O failed 00:34:19.793 Read completed with error (sct=0, sc=8) 00:34:19.793 starting I/O failed 00:34:19.793 Read completed with error (sct=0, sc=8) 00:34:19.793 starting I/O failed 00:34:19.793 Read completed with error (sct=0, sc=8) 00:34:19.793 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 [2024-07-15 16:33:02.654184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 
starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 [2024-07-15 16:33:02.654570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O 
failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Read completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.794 Write completed with error (sct=0, sc=8) 00:34:19.794 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 [2024-07-15 16:33:02.654933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 
00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Read completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 Write completed with error (sct=0, sc=8) 00:34:19.795 starting I/O failed 00:34:19.795 [2024-07-15 16:33:02.655288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:19.795 [2024-07-15 16:33:02.655497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.795 [2024-07-15 16:33:02.655542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.795 qpair failed and we were unable to recover it. 00:34:19.795 [2024-07-15 16:33:02.655733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.795 [2024-07-15 16:33:02.655824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.795 qpair failed and we were unable to recover it. 00:34:19.795 [2024-07-15 16:33:02.655961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.795 [2024-07-15 16:33:02.655987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.795 qpair failed and we were unable to recover it. 00:34:19.795 [2024-07-15 16:33:02.656164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.795 [2024-07-15 16:33:02.656205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.795 qpair failed and we were unable to recover it. 00:34:19.795 [2024-07-15 16:33:02.656373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.795 [2024-07-15 16:33:02.656401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.795 qpair failed and we were unable to recover it. 00:34:19.795 [2024-07-15 16:33:02.656524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.795 [2024-07-15 16:33:02.656566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.795 qpair failed and we were unable to recover it. 00:34:19.795 [2024-07-15 16:33:02.656734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.795 [2024-07-15 16:33:02.656770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.795 qpair failed and we were unable to recover it. 
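A compressed reading of the burst above: kill -9 479353 removed the target mid-run, so the reconnect example fails the 32 outstanding I/Os on each of its four qpairs (ids 1-4, matching the 0xF core mask and -q 32) with (sct=0, sc=8), then loops trying to re-establish the TCP connection; errno = 111 is ECONNREFUSED, since nothing listens on 10.0.0.2:4420 any more. The cycle as a sketch, with paths and the nvmfpid variable carried over from the sketches above:

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$nvmfpid"   # hard-kill the target; outstanding I/O completes in error
    sleep 2              # the initiator now retries connect(), logging errno = 111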
00:34:19.796 [... the preceding three messages (posix.c:1037 connect() failed, errno = 111 / nvme_tcp.c:2374 sock connection error of tqpair=0x7f7dd4000b90 / 'qpair failed and we were unable to recover it.') repeat for several dozen further reconnect attempts to 10.0.0.2, port=4420 between 16:33:02.656918 and 16:33:02.669167 ...]
00:34:19.797 [2024-07-15 16:33:02.669167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.797 [2024-07-15 16:33:02.669195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.797 qpair failed and we were unable to recover it. 00:34:19.797 [2024-07-15 16:33:02.669315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.797 [2024-07-15 16:33:02.669338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.797 qpair failed and we were unable to recover it. 00:34:19.797 [2024-07-15 16:33:02.669554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.797 [2024-07-15 16:33:02.669582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.797 qpair failed and we were unable to recover it. 00:34:19.797 [2024-07-15 16:33:02.669730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.797 [2024-07-15 16:33:02.669766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.797 qpair failed and we were unable to recover it. 00:34:19.797 [2024-07-15 16:33:02.669909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.669934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.670097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.670139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.670285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.670312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.670519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.670543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.670711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.670745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.670892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.670919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 
00:34:19.798 [2024-07-15 16:33:02.671085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.671107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.671358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.671391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.671534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.671561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.671709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.671745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.671897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.671925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.672103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.672138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.672294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.672318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.672445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.672469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.672674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.672701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.672908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.672933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 
00:34:19.798 [2024-07-15 16:33:02.673064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.673105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.673275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.673303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.673477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.673500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.673746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.673775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.673940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.673964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.674119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.674142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.674308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.674335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.674513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.674540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.674732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.674776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.674928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.674955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 
00:34:19.798 [2024-07-15 16:33:02.675126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.675154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.675262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.675285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.675546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.675579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.675751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.675779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.675949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.675974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.798 qpair failed and we were unable to recover it. 00:34:19.798 [2024-07-15 16:33:02.676212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.798 [2024-07-15 16:33:02.676240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.676348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.676376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.676552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.676590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.676744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.676773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.676943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.676984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 
00:34:19.799 [2024-07-15 16:33:02.677128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.677154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.677367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.677394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.677563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.677591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.677706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.677731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.677923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.677963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.678097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.678124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.678323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.678346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.678474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.678501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.678646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.678673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.678813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.678839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 
00:34:19.799 [2024-07-15 16:33:02.678975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.679001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.679199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.679226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.679468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.679492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.679673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.679700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.679845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.679873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.679981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.680006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.680148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.680172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.680367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.680395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.680563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.680586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.680791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.680820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 
00:34:19.799 [2024-07-15 16:33:02.680981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.681007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.799 qpair failed and we were unable to recover it. 00:34:19.799 [2024-07-15 16:33:02.681165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.799 [2024-07-15 16:33:02.681188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.681384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.681412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.681523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.681551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.681670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.681694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.681846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.681873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.681992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.682019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.682228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.682251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.682408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.682436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.682584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.682612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 
00:34:19.800 [2024-07-15 16:33:02.682859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.682884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.683017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.683045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.683157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.683185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.683331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.683370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.683597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.683625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.683761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.683790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.683919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.683945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.684081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.684105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.684259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.684287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.684410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.684434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 
00:34:19.800 [2024-07-15 16:33:02.684560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.684584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.684751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.684795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.684930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.684956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.685059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.685100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.685210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.685238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.800 [2024-07-15 16:33:02.685404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.800 [2024-07-15 16:33:02.685428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.800 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.685539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.685564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.685707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.685757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.685890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.685931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.686083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.686108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 
00:34:19.801 [2024-07-15 16:33:02.686288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.686316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.686476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.686500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.686616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.686640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.686785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.686815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.686924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.686954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.687118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.687143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.687307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.687335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.687458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.687483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.687707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.687735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.687856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.687883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 
00:34:19.801 [2024-07-15 16:33:02.687994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.688019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.688181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.688207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.688356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.688383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.688531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.688556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.688753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.688779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.688931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.688959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.689163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.689186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.689355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.689383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.689508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.689536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.689662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.689704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 
00:34:19.801 [2024-07-15 16:33:02.689846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.689872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.689978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.690003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.690203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.690236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.690391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.690428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.690601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.690629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.690776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.801 [2024-07-15 16:33:02.690801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.801 qpair failed and we were unable to recover it. 00:34:19.801 [2024-07-15 16:33:02.690927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.690952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.691146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.691174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.691340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.691368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.691533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.691566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 
00:34:19.802 [2024-07-15 16:33:02.691711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.691743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.691861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.691886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.692015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.692055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.692189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.692217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.692383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.692426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.692571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.692602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.692767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.692796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.692895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.692921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.693053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.693077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.693197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.693225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 
00:34:19.802 [2024-07-15 16:33:02.693376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.693400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.693525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.693549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.693714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.693747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.693868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.693893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.694036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.694064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.694264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.694291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.694473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.694496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.694674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.694702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.694825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.694851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 00:34:19.802 [2024-07-15 16:33:02.694961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.802 [2024-07-15 16:33:02.694986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.802 qpair failed and we were unable to recover it. 
00:34:19.802 [2024-07-15 16:33:02.695201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.802 [2024-07-15 16:33:02.695241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420
00:34:19.802 qpair failed and we were unable to recover it.
00:34:19.809 [2024-07-15 16:33:02.735229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.809 [2024-07-15 16:33:02.735255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420
00:34:19.809 qpair failed and we were unable to recover it.
00:34:19.809 [2024-07-15 16:33:02.735389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.735414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.735565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.735593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.735740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.735766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.735868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.735892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.736018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.736046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.736151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.736175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.736294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.736319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.736442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.736470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.736622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.736649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 00:34:19.809 [2024-07-15 16:33:02.736831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.736857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.809 qpair failed and we were unable to recover it. 
00:34:19.809 [2024-07-15 16:33:02.736985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.809 [2024-07-15 16:33:02.737026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.737161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.737201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.737321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.737360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.737585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.737613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.737768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.737794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.737892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.737929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.738086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.738114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.738310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.738334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.738490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.738518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.738655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.738682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 
00:34:19.810 [2024-07-15 16:33:02.738849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.738875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.739030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.739058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.739216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.739244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.739380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.739419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.739526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.739555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.739700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.739728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.739886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.739911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.740096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.740124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.740234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.740262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.740390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.740416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 
00:34:19.810 [2024-07-15 16:33:02.740545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.740569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.740722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.740757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.740937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.740963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.741133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.741161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.741298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.741326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.741461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.741486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.741632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.741657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.741818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.741844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.741976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.742001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 00:34:19.810 [2024-07-15 16:33:02.742103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.810 [2024-07-15 16:33:02.742129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.810 qpair failed and we were unable to recover it. 
00:34:19.810 [2024-07-15 16:33:02.742257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.742297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.742447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.742486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.742634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.742662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.742767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.742796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.742906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.742931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.743067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.743092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.743218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.743246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.743380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.743419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.743523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.743548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 00:34:19.811 [2024-07-15 16:33:02.743655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.811 [2024-07-15 16:33:02.743683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.811 qpair failed and we were unable to recover it. 
00:34:19.811 [2024-07-15 16:33:02.743813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.743838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.744020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.744063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.744225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.744253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.744392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.744432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.744555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.744597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.744749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.744777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.744915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.744940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.745098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.745140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.745258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.745286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.745458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.745497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 
00:34:19.812 [2024-07-15 16:33:02.745643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.745671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.745872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.745901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.746042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.746067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.746179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.746204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.746382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.746415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.746531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.746570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.746699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.746727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.746883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.746909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.747023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.747048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.747245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.747273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 
00:34:19.812 [2024-07-15 16:33:02.747383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.747411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.747531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.747556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.747665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.747699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.747850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.747879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.747997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.748022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.748155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.748180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.748402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.748437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.748553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.748578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.748695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.748720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 00:34:19.812 [2024-07-15 16:33:02.748897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.812 [2024-07-15 16:33:02.748925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.812 qpair failed and we were unable to recover it. 
00:34:19.812 [2024-07-15 16:33:02.749058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.749083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.749212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.749237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.749470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.749497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.749669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.749696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.749839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.749865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.749976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.750001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.750236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.750261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.750404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.750432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.750618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.750646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.750808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.750834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 
00:34:19.813 [2024-07-15 16:33:02.750966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.750991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.751120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.751152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.751294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.751319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.751447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.751472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.751614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.751642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.751838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.751864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.752020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.752047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.752153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.752181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.752393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.752428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.752581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.752608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 
00:34:19.813 [2024-07-15 16:33:02.752774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.752802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.752919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.752944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.753085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.753110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.753226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.753254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.753415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.753440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.753550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.753575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.753696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.753724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.753843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.753868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.753971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.753996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.754130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.754158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 
00:34:19.813 [2024-07-15 16:33:02.754277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.754302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.754439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.754464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.813 [2024-07-15 16:33:02.754638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.813 [2024-07-15 16:33:02.754666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.813 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.754811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.754837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.754993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.755018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.755252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.755280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.755448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.755473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.755578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.755604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.755802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.755827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.756034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.756059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 
00:34:19.814 [2024-07-15 16:33:02.756230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.756258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.756369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.756397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.756539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.756564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.756698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.756744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.756932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.756965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.757124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.757149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.757375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.757403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.757520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.757548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.757711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.757749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.757885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.757927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 
00:34:19.814 [2024-07-15 16:33:02.758029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.758057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.758255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.758284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.758418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.758446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.758576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.758604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.758782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.758808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.758942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.758984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.759175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.759202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.759324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.759349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.759476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.759501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.759616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.759645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 
00:34:19.814 [2024-07-15 16:33:02.759761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.759787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.759922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.759950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.760065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.760094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.814 [2024-07-15 16:33:02.760194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.814 [2024-07-15 16:33:02.760218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.814 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.760350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.760375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.760545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.760573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.760727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.760771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.760900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.760926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.761026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.761054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.761192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.761217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 
00:34:19.815 [2024-07-15 16:33:02.761322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.761346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.761557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.761585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.761695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.761735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.761852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.761877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.761987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.762028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.762195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.762220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.762319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.762344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.762467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.762495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.762611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.762637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.762766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.762793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 
00:34:19.815 [2024-07-15 16:33:02.762921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.762949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.763113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.763138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.763247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.763287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.763446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.763473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.763622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.763648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.763768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.763793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:19.815 qpair failed and we were unable to recover it. 00:34:19.815 [2024-07-15 16:33:02.763925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.815 [2024-07-15 16:33:02.763953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.082 qpair failed and we were unable to recover it. 00:34:20.082 [2024-07-15 16:33:02.764197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.082 [2024-07-15 16:33:02.764223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.082 qpair failed and we were unable to recover it. 00:34:20.082 [2024-07-15 16:33:02.764365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.082 [2024-07-15 16:33:02.764393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.082 qpair failed and we were unable to recover it. 00:34:20.082 [2024-07-15 16:33:02.764538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.082 [2024-07-15 16:33:02.764568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.082 qpair failed and we were unable to recover it. 
00:34:20.082 [2024-07-15 16:33:02.764706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.082 [2024-07-15 16:33:02.764732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.082 qpair failed and we were unable to recover it. 00:34:20.082 [2024-07-15 16:33:02.764876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.764925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.765088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.765116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.765259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.765287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.765419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.765444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.765582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.765610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.765758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.765784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.765912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.765938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.766055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.766083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.766245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.766271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 
00:34:20.083 [2024-07-15 16:33:02.766377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.766402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.766550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.766577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.766808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.766836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.766972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.766997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.767171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.767202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.767359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.767384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.767579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.767607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.767748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.767792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.767891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.767916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.768146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.768174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 
00:34:20.083 [2024-07-15 16:33:02.768307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.768334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.768501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.768528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.768659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.768701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.768849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.768878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.769024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.769053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.769220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.769248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.769356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.769384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.769544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.769569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.769733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.769766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.769899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.769927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 
00:34:20.083 [2024-07-15 16:33:02.770151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.770175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.770350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.770378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.083 qpair failed and we were unable to recover it. 00:34:20.083 [2024-07-15 16:33:02.770544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.083 [2024-07-15 16:33:02.770572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.770701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.770725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.770833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.770858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.771036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.771064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.771176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.771201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.771377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.771401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.771549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.771577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.771732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.771764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 
00:34:20.084 [2024-07-15 16:33:02.771897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.771939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.772096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.772132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.772252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.772278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.772441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.772481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.772638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.772670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.772845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.772871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.772973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.772997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.773210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.773238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.773374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.773399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.773517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.773542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 
00:34:20.084 [2024-07-15 16:33:02.773770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.773795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.773925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.773950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.774159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.774187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.774356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.774383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.774600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.774624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.774743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.774772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.774884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.774912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.775073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.775098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.775320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.775348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.775443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.775471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 
00:34:20.084 [2024-07-15 16:33:02.775583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.775608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.775763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.775789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.775949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.775977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.084 [2024-07-15 16:33:02.776214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.084 [2024-07-15 16:33:02.776245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.084 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.776387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.776415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.776580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.776607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.776878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.776904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.777045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.777073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.777300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.777328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.777471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.777497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 
00:34:20.085 [2024-07-15 16:33:02.777624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.777649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.777777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.777806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.777946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.777971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.778099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.778124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.778302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.778329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.778500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.778523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.778751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.778794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.778950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.778975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.779199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.779223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.779401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.779433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 
00:34:20.085 [2024-07-15 16:33:02.779708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.779742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.779910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.779940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.780055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.780079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.780316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.780352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.780486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.780510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.780677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.780719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.780910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.780939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.781061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.781099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.781339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.781370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.781574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.781602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 
00:34:20.085 [2024-07-15 16:33:02.781813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.781838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.781993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.782018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.782178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.782206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.782409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.085 [2024-07-15 16:33:02.782432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.085 qpair failed and we were unable to recover it. 00:34:20.085 [2024-07-15 16:33:02.782592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.782625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.782769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.782798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.782922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.782951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.783116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.783157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.783316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.783344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.783573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.783601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 
00:34:20.086 [2024-07-15 16:33:02.783733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.783766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.783912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.783940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.784100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.784124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.784300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.784328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.784442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.784470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.784652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.784679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.784860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.784886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.785002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.785043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.785205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.785229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.785407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.785435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 
00:34:20.086 [2024-07-15 16:33:02.785598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.785626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.785852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.785881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.786022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.786049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.786244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.786272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.786428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.786451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.786660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.786688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.786863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.786891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.787079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.787103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.787269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.787297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.787474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.787501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 
00:34:20.086 [2024-07-15 16:33:02.787684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.787708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.787938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.787971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.788184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.788212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.788358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.788381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.788524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.788564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.788761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.788790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.788941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.086 [2024-07-15 16:33:02.788966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.086 qpair failed and we were unable to recover it. 00:34:20.086 [2024-07-15 16:33:02.789167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.789194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.789319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.789346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.789613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.789644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 
00:34:20.087 [2024-07-15 16:33:02.789830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.789859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.790007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.790034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.790189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.790212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.790349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.790390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.790525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.790553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.790710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.790744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.790891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.790917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.791051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.791079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.791247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.791271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 00:34:20.087 [2024-07-15 16:33:02.791456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.087 [2024-07-15 16:33:02.791484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.087 qpair failed and we were unable to recover it. 
00:34:20.093 [2024-07-15 16:33:02.830594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.093 [2024-07-15 16:33:02.830625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.093 qpair failed and we were unable to recover it. 00:34:20.093 [2024-07-15 16:33:02.830803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.093 [2024-07-15 16:33:02.830831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.093 qpair failed and we were unable to recover it. 00:34:20.093 [2024-07-15 16:33:02.831192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.093 [2024-07-15 16:33:02.831241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.093 qpair failed and we were unable to recover it. 00:34:20.093 [2024-07-15 16:33:02.831387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.093 [2024-07-15 16:33:02.831415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.093 qpair failed and we were unable to recover it. 00:34:20.093 [2024-07-15 16:33:02.831565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.831604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.831773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.831801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.832014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.832041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.832195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.832222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.832419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.832442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.832656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.832684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 
00:34:20.094 [2024-07-15 16:33:02.832810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.832837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.832957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.832986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.833126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.833164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.833286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.833310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.833443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.833471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.833643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.833670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.833835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.833861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.834015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.834055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.834230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.834259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.834528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.834555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 
00:34:20.094 [2024-07-15 16:33:02.834752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.834801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.834962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.834987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.835146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.835178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.835281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.835309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.835525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.835549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.835730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.835766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.835986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.836014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.836206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.836238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.836432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.836455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 00:34:20.094 [2024-07-15 16:33:02.836575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.094 [2024-07-15 16:33:02.836603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.094 qpair failed and we were unable to recover it. 
00:34:20.095 [2024-07-15 16:33:02.836793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.836844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.836981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.837025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.837144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.837168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.837354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.837395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.837539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.837566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.837783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.837812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.838036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.838063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.838246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.838289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.838425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.838453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.838658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.838686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 
00:34:20.095 [2024-07-15 16:33:02.838884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.838915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.839029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.839071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.839300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.839339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.839557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.839584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.839745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.839769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.839983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.840021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.840170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.840198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.840324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.840352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.840551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.840574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.840733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.840773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 
00:34:20.095 [2024-07-15 16:33:02.840944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.840976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.841121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.841152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.841339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.841362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.841568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.841596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.841796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.841825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.841967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.841995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.842174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.842197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.842425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.842476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.842714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.842748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 00:34:20.095 [2024-07-15 16:33:02.842942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.095 [2024-07-15 16:33:02.842970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.095 qpair failed and we were unable to recover it. 
00:34:20.096 [2024-07-15 16:33:02.843154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.843177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.843350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.843400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.843549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.843580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.843809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.843844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.844134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.844157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.844324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.844373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.844562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.844595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.844731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.844782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.844897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.844922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.845142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.845170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 
00:34:20.096 [2024-07-15 16:33:02.845335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.845363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.845585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.845625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.845764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.845803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.845953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.845996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.846166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.846194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.846366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.846393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.846626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.846649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.846871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.846910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.847077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.847105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.847249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.847277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 
00:34:20.096 [2024-07-15 16:33:02.847460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.847483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.847675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.847705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.847834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.847863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.848013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.848041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.848201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.848225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.848485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.848513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.848732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.848766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.848936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.848964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.849137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.849161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.096 [2024-07-15 16:33:02.849340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.849394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 
00:34:20.096 [2024-07-15 16:33:02.849710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.096 [2024-07-15 16:33:02.849756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.096 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.849904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.849932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.850093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.850117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.850244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.850286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.850447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.850475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.850589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.850616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.850842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.850867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.851047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.851106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.851252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.851291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.851449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.851478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 
00:34:20.097 [2024-07-15 16:33:02.851693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.851721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.851877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.851903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.852128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.852156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.852372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.852403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.852571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.852599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.852766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.852809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.852999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.853046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.853257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.853294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.853473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.853497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.853648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.853676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 
00:34:20.097 [2024-07-15 16:33:02.853815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.853844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.853966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.853993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.854147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.854186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.854321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.854363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.854525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.854552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.854759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.854787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.854936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.854961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.855147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.855201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.855337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.855364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.855576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.855604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 
00:34:20.097 [2024-07-15 16:33:02.855767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.097 [2024-07-15 16:33:02.855791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.097 qpair failed and we were unable to recover it. 00:34:20.097 [2024-07-15 16:33:02.855932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.855957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.856106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.856134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.856275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.856301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.856484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.856506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.856649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.856692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.856894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.856918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.857103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.857129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.857257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.857294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.857492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.857518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 
00:34:20.098 [2024-07-15 16:33:02.857671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.857704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.857936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.857973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.858114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.858136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.858333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.858386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.858558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.858591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.858787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.858815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.858997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.859036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.859214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.859263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.859405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.859436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.859640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.859671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 
00:34:20.098 [2024-07-15 16:33:02.859816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.859840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.860168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.860228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.860462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.860491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.860655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.860682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.860838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.860863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.860983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.861015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.861253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.861285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.861391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.861417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.861568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.861591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 00:34:20.098 [2024-07-15 16:33:02.861809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-07-15 16:33:02.861837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.098 qpair failed and we were unable to recover it. 
00:34:20.684 [2024-07-15 16:33:03.357610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.684 [2024-07-15 16:33:03.357657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:20.684 qpair failed and we were unable to recover it.
00:34:20.684 [2024-07-15 16:33:03.357806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.357833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.357957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.357983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.358128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.358157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.358271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.358299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.358425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.358450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.358660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.358688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.358850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.358876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.359044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.359073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.359219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.359242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.359476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.359533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 
00:34:20.684 [2024-07-15 16:33:03.359754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.359783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.359899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.359926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.360083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.360107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.360225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.360261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.360412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.360440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.360575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.360604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.360801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.684 [2024-07-15 16:33:03.360827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.684 qpair failed and we were unable to recover it. 00:34:20.684 [2024-07-15 16:33:03.360952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.685 [2024-07-15 16:33:03.360980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.685 qpair failed and we were unable to recover it. 00:34:20.685 [2024-07-15 16:33:03.361146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.685 [2024-07-15 16:33:03.361175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.685 qpair failed and we were unable to recover it. 00:34:20.685 [2024-07-15 16:33:03.361323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.685 [2024-07-15 16:33:03.361351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.685 qpair failed and we were unable to recover it. 
00:34:20.692 [2024-07-15 16:33:03.398069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.398098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.398248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.398277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.398457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.398480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.398612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.398653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.398791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.398817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.398982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.399011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.399193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.399216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.399373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.399401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.399544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.399572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.399752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.399781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 
00:34:20.692 [2024-07-15 16:33:03.399942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.399966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.400087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.400111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.400253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.400282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.400444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.400472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.400643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.400666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.400880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.400909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.401047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.401075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.401251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.401279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.401435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.401458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.401702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.401730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 
00:34:20.692 [2024-07-15 16:33:03.401860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.692 [2024-07-15 16:33:03.401894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.692 qpair failed and we were unable to recover it. 00:34:20.692 [2024-07-15 16:33:03.402020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.402055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.402226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.402249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.402406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.402429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.402568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.402596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.402744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.402772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.402935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.402960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.403142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.403170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.403312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.403340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.403470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.403498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 
00:34:20.693 [2024-07-15 16:33:03.403661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.403700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.403886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.403915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.404080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.404112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.404281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.404317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.404487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.404511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.404687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.404714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.404856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.404882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.405078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.405112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.405292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.405316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.405501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.405529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 
00:34:20.693 [2024-07-15 16:33:03.405637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.405665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.405797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.405826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.405956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.405981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.406148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.406187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.406393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.406421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.406561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.406589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.406756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.406783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.406908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.406950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.407148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.693 [2024-07-15 16:33:03.407187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.693 qpair failed and we were unable to recover it. 00:34:20.693 [2024-07-15 16:33:03.407356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.407384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 
00:34:20.694 [2024-07-15 16:33:03.407546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.407570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.407776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.407821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.407960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.407988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.408176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.408215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.408332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.408371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.408504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.408528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.408765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.408794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.408956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.408984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.409091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.409129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.409260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.409284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 
00:34:20.694 [2024-07-15 16:33:03.409437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.409480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.409608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.409635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.409832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.409858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.409986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.410025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.410169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.410197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.410312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.410340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.410487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.410511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.410658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.410682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.410882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.410908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.411052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.411080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 
00:34:20.694 [2024-07-15 16:33:03.411207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.411248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.411384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.411423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.411575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.411603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.411767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.411796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.411911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.411937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.412173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.412217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.694 qpair failed and we were unable to recover it. 00:34:20.694 [2024-07-15 16:33:03.412425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.694 [2024-07-15 16:33:03.412462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.412601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.412629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.412784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.412810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.412932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.412958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 
00:34:20.695 [2024-07-15 16:33:03.413087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.413114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.413250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.413278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.413416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.413440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.413580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.413625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.413755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.413784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.413927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.413956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.414199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.414223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.414386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.414415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.414641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.414669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.414809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.414837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 
00:34:20.695 [2024-07-15 16:33:03.414987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.415012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.415131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.415162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.415385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.415414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.415575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.415603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.415773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.415799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.415952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.415981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.416185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.695 [2024-07-15 16:33:03.416213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.695 qpair failed and we were unable to recover it. 00:34:20.695 [2024-07-15 16:33:03.416380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.416407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.416559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.416587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.416719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.416752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 
00:34:20.696 [2024-07-15 16:33:03.416871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.416900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.417067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.417095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.417302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.417326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.417491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.417518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.417656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.417684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.417814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.417843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.417945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.417971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.418133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.418157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.418286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.418313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.418450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.418478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 
00:34:20.696 [2024-07-15 16:33:03.418613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.418637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.418797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.418823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.419015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.419043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.419154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.419182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.419431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.419455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.419611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.419645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.419796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.419824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.419960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.419986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.420154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.420178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.420336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.420362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 
00:34:20.696 [2024-07-15 16:33:03.420587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.420621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.420796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.420824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.420956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.420981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.421201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.421228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.696 [2024-07-15 16:33:03.421362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.696 [2024-07-15 16:33:03.421389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.696 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.421598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.421625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.421774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.421800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.421990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.422021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.422195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.422222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.422399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.422435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 
00:34:20.697 [2024-07-15 16:33:03.422590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.422614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.422807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.422835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.422966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.422993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.423141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.423168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.423274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.423298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.423462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.423487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.423632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.423659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.423820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.423846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.423975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.424000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 00:34:20.697 [2024-07-15 16:33:03.424173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.697 [2024-07-15 16:33:03.424199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.697 qpair failed and we were unable to recover it. 
00:34:20.697 [2024-07-15 16:33:03.424381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.697 [2024-07-15 16:33:03.424407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:20.697 qpair failed and we were unable to recover it.
00:34:20.697 [... the same three-line error sequence repeats for every subsequent reconnect attempt (~200 further attempts over the next ~40 ms, identical apart from timestamps, elided): each connect() to 10.0.0.2:4420 fails with errno = 111 and each qpair is abandoned ...]
00:34:20.705 [2024-07-15 16:33:03.463858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.705 [2024-07-15 16:33:03.463886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:20.705 qpair failed and we were unable to recover it.
00:34:20.705 [2024-07-15 16:33:03.464014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.464053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.464190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.464214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.464398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.464426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.464629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.464656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.464778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.464803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.464947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.464987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.465191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.465231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.465379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.465407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.465617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.465641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.465824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.465850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 
00:34:20.705 [2024-07-15 16:33:03.466004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.466032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.466216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.466244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.466392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.466416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.466544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.466569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.466716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.466761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.466974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.467002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.467167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.467202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.467357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.467385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.467601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.467629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.467761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.467790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 
00:34:20.705 [2024-07-15 16:33:03.468004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.468046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.468196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.468250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.468409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.705 [2024-07-15 16:33:03.468437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.705 qpair failed and we were unable to recover it. 00:34:20.705 [2024-07-15 16:33:03.468599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.468627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.468752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.468793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.468956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.468981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.469124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.469152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.469293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.469321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.469453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.469477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.469656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.469693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 
00:34:20.706 [2024-07-15 16:33:03.469883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.469907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.470055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.470083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.470221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.470258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.470442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.470470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.470623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.470651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.470879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.470907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.471043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.471082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.471216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.471266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.471430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.471458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.471584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.471612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 
00:34:20.706 [2024-07-15 16:33:03.471790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.471817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.471947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.471972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.472127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.472155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.472296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.472324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.472479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.472518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.472683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.472711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.472913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.472939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.473069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.473098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.473272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.473312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.473457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.473485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 
00:34:20.706 [2024-07-15 16:33:03.473619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.473648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.473808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.473837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.473964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.473989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.706 [2024-07-15 16:33:03.474184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.706 [2024-07-15 16:33:03.474207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.706 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.474367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.474395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.474537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.474564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.474745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.474771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.474975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.475003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.475149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.475178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.475383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.475412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 
00:34:20.707 [2024-07-15 16:33:03.475525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.475563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.475685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.475709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.475895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.475928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.476078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.476133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.476303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.476330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.476531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.476593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.476764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.476804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.476977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.477017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.477194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.477219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.477369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.477431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 
00:34:20.707 [2024-07-15 16:33:03.477569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.477597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.477753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.477783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.477956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.477982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.478103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.478142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.478322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.478354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.478517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.478581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.478690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.478715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.478875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.478901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.479051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.479079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.479253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.479281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 
00:34:20.707 [2024-07-15 16:33:03.479417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.479441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.479590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.479630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.479843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.479875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.480010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.480051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.480233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.480256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.707 qpair failed and we were unable to recover it. 00:34:20.707 [2024-07-15 16:33:03.480411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.707 [2024-07-15 16:33:03.480439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.480598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.480627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.480751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.480780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.480937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.480963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.481112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.481158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 
00:34:20.708 [2024-07-15 16:33:03.481271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.481297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.481426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.481453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.481591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.481616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.481760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.481802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.481967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.481994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.482095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.482136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.482276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.482301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.482441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.482466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.482615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.482643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.482796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.482826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 
00:34:20.708 [2024-07-15 16:33:03.483060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.483099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.483351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.483379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.483549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.483577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.483749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.483806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.483953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.483978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.484166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.484238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.484383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.484421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.484556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.484585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.484722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.484768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 00:34:20.708 [2024-07-15 16:33:03.484893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.708 [2024-07-15 16:33:03.484934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.708 qpair failed and we were unable to recover it. 
00:34:20.708 [2024-07-15 16:33:03.485069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.485097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.485344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.485372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.485545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.485568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.485729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.485773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.485901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.485929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.486068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.486096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.486257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.486283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.486487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.486515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.486653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.486681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.486879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.486904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 
00:34:20.709 [2024-07-15 16:33:03.487002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.487040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.487186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.487225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.487441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.487468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.487627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.487655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.487827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.487853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.488033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.488061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.488196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.488225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.488402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.488429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.488543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.488581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 00:34:20.709 [2024-07-15 16:33:03.488764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.709 [2024-07-15 16:33:03.488789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.709 qpair failed and we were unable to recover it. 
00:34:20.709 [2024-07-15 16:33:03.488956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.709 [2024-07-15 16:33:03.488984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:20.709 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 16:33:03.489 through 16:33:03.529 (elapsed 00:34:20.709-00:34:20.716) ...]
00:34:20.716 [2024-07-15 16:33:03.529663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.529689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.529821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.529847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.529945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.529971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.530098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.530124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.530280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.530304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.530466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.530492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.530675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.530709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.530849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.530876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.530999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.531036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.531259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.531285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 
00:34:20.716 [2024-07-15 16:33:03.531432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.531458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.531560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.531586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.531752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.531777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.531876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.531901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.532047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.532073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.532244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.532270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.716 qpair failed and we were unable to recover it. 00:34:20.716 [2024-07-15 16:33:03.532403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.716 [2024-07-15 16:33:03.532442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.532586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.532626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.532872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.532898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.533035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.533060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 
00:34:20.717 [2024-07-15 16:33:03.533166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.533194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.533325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.533349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.533494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.533534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.533694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.533719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.533899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.533924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.534130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.534155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.534331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.534357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.534552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.534577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.534735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.534765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.534950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.534976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 
00:34:20.717 [2024-07-15 16:33:03.535101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.535126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.535276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.535302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.535473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.535496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.535614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.535654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.535767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.535793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.535923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.535948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.536086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.536110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.536278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.536303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.536416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.536448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.536642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.536666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 
00:34:20.717 [2024-07-15 16:33:03.536785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.536811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.536941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.536966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.537122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.537147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.537303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.537327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.537448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.537487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.537616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.537641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.537777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.537803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.538020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.538059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.717 [2024-07-15 16:33:03.538189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.717 [2024-07-15 16:33:03.538213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.717 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.538364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.538388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 
00:34:20.718 [2024-07-15 16:33:03.538528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.538553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.538645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.538669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.538838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.538864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.539036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.539061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.539258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.539282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.539416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.539440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.539625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.539649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.539838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.539864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.539965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.539990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.540131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.540155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 
00:34:20.718 [2024-07-15 16:33:03.540305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.540344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.540559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.540595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.540776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.540802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.540960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.540985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.541162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.541185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.541334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.541358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.541608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.541633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.541788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.541826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.542000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.542042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.542184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.542208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 
00:34:20.718 [2024-07-15 16:33:03.542398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.542422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.542576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.542600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.542837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.542863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.542977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.543003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.543180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.543220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.543342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.543366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.543513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.543538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.543653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.543679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.543828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.543854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.718 [2024-07-15 16:33:03.543973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.543998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 
00:34:20.718 [2024-07-15 16:33:03.544148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.718 [2024-07-15 16:33:03.544188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.718 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.544390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.544423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.544607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.544630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.544806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.544840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.544982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.545007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.545195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.545219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.545365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.545390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.545522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.545546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.545688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.545731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.545907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.545947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 
00:34:20.719 [2024-07-15 16:33:03.546114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.546138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.546335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.546359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.546546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.546581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.546747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.546788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.546942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.546967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.547145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.547168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.547348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.547371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.547473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.547513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.547664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.547689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.547842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.547868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 
00:34:20.719 [2024-07-15 16:33:03.548059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.548092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.548243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.548267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.548445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.548470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.548660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.548696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.548874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.548899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.549059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.549084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.719 [2024-07-15 16:33:03.549252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.719 [2024-07-15 16:33:03.549276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.719 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.549405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.549429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.549606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.549644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.549796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.549821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 
00:34:20.720 [2024-07-15 16:33:03.550043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.550067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.550184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.550208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.550340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.550371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.550500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.550525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.550654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.550682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.550820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.550852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.550993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.551034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.551262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.551291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.551403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.551431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.551586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.551614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 
00:34:20.720 [2024-07-15 16:33:03.551752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.551778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.551935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.551961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.552094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.552122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.552330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.552358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.552463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.552487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.552613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.552637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.552807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.552833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.553011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.553053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.553223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.553247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 00:34:20.720 [2024-07-15 16:33:03.553446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.720 [2024-07-15 16:33:03.553474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.720 qpair failed and we were unable to recover it. 
00:34:20.720 [2024-07-15 16:33:03.553661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.720 [2024-07-15 16:33:03.553689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:20.720 qpair failed and we were unable to recover it.
[... this three-message sequence (connect() failed with errno = 111; sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim, with only the microsecond timestamps advancing, for some two hundred further connection attempts through 16:33:03.592913 ...]
00:34:20.725 [2024-07-15 16:33:03.592885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.725 [2024-07-15 16:33:03.592913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:20.725 qpair failed and we were unable to recover it.
00:34:20.725 [2024-07-15 16:33:03.593067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-15 16:33:03.593090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-15 16:33:03.593202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.725 [2024-07-15 16:33:03.593226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.725 qpair failed and we were unable to recover it. 00:34:20.725 [2024-07-15 16:33:03.593378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.593406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.593607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.593635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.593835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.593860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.594002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.594045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.594158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.594186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.594320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.594348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.594508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.594546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.594703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.594731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-15 16:33:03.594919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.594944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.595058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.595086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.595217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.595241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.595360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.595385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.595570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.595598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.595697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.595725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.595865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.595890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.596018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.596058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.596197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.596226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.596365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.596393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-15 16:33:03.596534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.596572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.596694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.596733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.596890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.596918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.597049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.597077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.597189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.597213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.597352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.597376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.597515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.597543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.597690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.597718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.597879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.597905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.598010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.598035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-15 16:33:03.598151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.598179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.598311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.598339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.598482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.598507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.598641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.598666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.598792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.598835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.598936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.598961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.599130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.599155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.599288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.599319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.599442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.599469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.599574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.599602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-15 16:33:03.599747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.599774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.599927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.599969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.600130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.600157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.600300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.600328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.600432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.600472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.600594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.600619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.600759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.600788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.600929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.600957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.601141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.601165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.601320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.601348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 
00:34:20.726 [2024-07-15 16:33:03.601459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.726 [2024-07-15 16:33:03.601487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.726 qpair failed and we were unable to recover it. 00:34:20.726 [2024-07-15 16:33:03.601648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.601677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.601802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.601828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.601962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.601987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.602109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.602137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.602243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.602270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.602400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.602425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.602547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.602571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.602724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.602774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.602909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.602937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 
00:34:20.727 [2024-07-15 16:33:03.603100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.603128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.603283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.603311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.603451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.603479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.603581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.603609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.603718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.603764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.603966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.603995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.604136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.604164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.604297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.604325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.604443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.604468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.604575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.604599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 
00:34:20.727 [2024-07-15 16:33:03.604716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.604752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.604905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.604930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.605046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.605071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.605195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.605235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.605378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.605406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.605515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.605543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.605662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.605686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.605821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.605847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.605994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.606023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.606129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.606158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 
00:34:20.727 [2024-07-15 16:33:03.606322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.606346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.606473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.606514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.606623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.606651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.606784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.606813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.606946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.606971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.607115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.607139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.607262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.607290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.607418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.607451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.607567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.607591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.607714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.607759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 
00:34:20.727 [2024-07-15 16:33:03.607902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.607930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.608059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.608088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.608221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.608260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.608382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.727 [2024-07-15 16:33:03.608407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.727 qpair failed and we were unable to recover it. 00:34:20.727 [2024-07-15 16:33:03.608538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.608566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.608724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.608758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.608902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.608927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.609052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.609075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.609250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.609278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.609416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.609444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 
00:34:20.728 [2024-07-15 16:33:03.609586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.609610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.609758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.609785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.609903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.609931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.610036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.610063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.610192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.610216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.610408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.610436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.610546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.610575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.610684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.610712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.610877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.610902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.611041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.611066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 
00:34:20.728 [2024-07-15 16:33:03.611244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.611272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.611408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.611436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.611574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.611618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.611753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.611781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.611931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.611960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.612112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.612140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.612255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.612294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.612462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.612486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.612632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.612661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.612798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.612824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 
00:34:20.728 [2024-07-15 16:33:03.612952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.612977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.613099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.613137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.613266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.613294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.613428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.613457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.613598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.613622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.613774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.613816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.613953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.613980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.614110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.614138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.614246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.614271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 00:34:20.728 [2024-07-15 16:33:03.614417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.614456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it. 
00:34:20.728 [2024-07-15 16:33:03.614567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.728 [2024-07-15 16:33:03.614596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.728 qpair failed and we were unable to recover it.
00:34:20.728 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times without interruption through [2024-07-15 16:33:03.648754] ...]
00:34:20.732 [2024-07-15 16:33:03.648903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.732 [2024-07-15 16:33:03.648929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:20.732 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.649038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.649064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.649210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.649239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.649365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.649393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.649539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.649565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.649713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.649743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.649899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.649928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.650063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.650092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.650234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.650259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.014 [2024-07-15 16:33:03.650398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.650440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 
00:34:21.014 [2024-07-15 16:33:03.650573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.014 [2024-07-15 16:33:03.650601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.014 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.650735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.650770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.650938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.650964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.651085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.651126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.651237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.651265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.651398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.651427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.651561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.651586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.651730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.651767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.651944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.651973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.652110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.652139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 
00:34:21.015 [2024-07-15 16:33:03.652279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.652318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.652429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.652454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.652581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.652609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.652750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.652779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.652912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.652937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.653093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.653134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.653296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.653324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.653485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.653513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.653683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.653711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.653835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.653862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 
00:34:21.015 [2024-07-15 16:33:03.654034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.654062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.654191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.654220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.654357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.654382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.654566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.654593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.654735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.654769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.654923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.654947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.655086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.655109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.655274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.655296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.655471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.655498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.655633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.655661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 
00:34:21.015 [2024-07-15 16:33:03.655804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.655830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.655964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.655990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.656140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.656169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.656329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.656357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.656474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.656514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.656647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.656670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.656814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.656843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.656978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.657006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.657121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.657145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.657316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.657356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 
00:34:21.015 [2024-07-15 16:33:03.657463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.015 [2024-07-15 16:33:03.657491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.015 qpair failed and we were unable to recover it. 00:34:21.015 [2024-07-15 16:33:03.657590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.657618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.657784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.657810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.657990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.658017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.658127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.658155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.658253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.658280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.658415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.658439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.658585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.658626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.658762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.658791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.658920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.658952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 
00:34:21.016 [2024-07-15 16:33:03.659065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.659089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.659243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.659268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.659397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.659425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.659531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.659558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.659670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.659710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.659864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.659905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.660043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.660071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.660205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.660233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.660377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.660402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.660529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.660553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 
00:34:21.016 [2024-07-15 16:33:03.660703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.660730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.660885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.660911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.661021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.661046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.661160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.661184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.661340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.661367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.661473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.661501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.661627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.661666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.661808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.661849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.661960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.661988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.662126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.662153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 
00:34:21.016 [2024-07-15 16:33:03.662289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.662313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.662426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.662449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.662555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.662583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.662750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.662778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.662897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.662922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.663055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.663079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.663238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.663270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.663375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.663403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.663558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.663581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.016 [2024-07-15 16:33:03.663687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.663712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 
00:34:21.016 [2024-07-15 16:33:03.663866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.016 [2024-07-15 16:33:03.663894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.016 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.664010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.664037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.664201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.664225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.664374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.664416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.664543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.664571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.664702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.664730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.664885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.664923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.665066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.665090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.665263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.665291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.665454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.665482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 
00:34:21.017 [2024-07-15 16:33:03.665601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.665625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.665770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.665796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.665934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.665961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.666098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.666125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.666253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.666278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.666424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.666447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.666591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.666619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.666724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.666757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.666909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.666935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.667072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.667096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 
00:34:21.017 [2024-07-15 16:33:03.667247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.667274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.667417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.667445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.667545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.667569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.667765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.667806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.667929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.667958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.668110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.668138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.668273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.668297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.668406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.668428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.668548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.668571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.668694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.668722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 
00:34:21.017 [2024-07-15 16:33:03.668859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.668884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.669029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.669053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.669190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.669219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.669351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.669378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.669500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.669524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.669669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.669694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.669834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.669860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.669988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.670013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.670157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.670181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 00:34:21.017 [2024-07-15 16:33:03.670279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.017 [2024-07-15 16:33:03.670302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.017 qpair failed and we were unable to recover it. 
00:34:21.017 [2024-07-15 16:33:03.670451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.017 [2024-07-15 16:33:03.670479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.017 qpair failed and we were unable to recover it.
00:34:21.017 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt with successive timestamps from 16:33:03.670 through 16:33:03.705; duplicate occurrences elided ...]
00:34:21.023 [2024-07-15 16:33:03.705112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.023 [2024-07-15 16:33:03.705140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.023 qpair failed and we were unable to recover it.
00:34:21.023 [2024-07-15 16:33:03.705264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.705290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.705396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.705421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.705536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.705563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.705697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.705725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.705881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.705907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.706033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.706059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.706231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.706259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.706395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.706422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.706562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.706587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.706714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.706754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 
00:34:21.023 [2024-07-15 16:33:03.706939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.706967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.707102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.707129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.707274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.707299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.707470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.707498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.707632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.707660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.707798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.707827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.707957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.707982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.708094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.708119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.708234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.708262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.708380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.708408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 
00:34:21.023 [2024-07-15 16:33:03.708574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.708600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.023 qpair failed and we were unable to recover it. 00:34:21.023 [2024-07-15 16:33:03.708729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.023 [2024-07-15 16:33:03.708778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.708875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.708903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.709030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.709058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.709219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.709244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.709375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.709415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.709548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.709576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.709674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.709702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.709811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.709837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.709994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.710019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 
00:34:21.024 [2024-07-15 16:33:03.710157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.710184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.710323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.710356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.710493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.710518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.710700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.710723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.710890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.710916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.711013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.711038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.711149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.711173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.711316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.711341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.711484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.711512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.711621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.711648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 
00:34:21.024 [2024-07-15 16:33:03.711776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.711818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.711912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.711938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.712052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.712081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.712213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.712240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.712406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.712430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.712554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.712595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.712701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.712729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.712866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.712894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.713052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.713077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.713219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.713261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 
00:34:21.024 [2024-07-15 16:33:03.713404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.713432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.713566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.713594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.713728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.713759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.713888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.713913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.714036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.714063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.714164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.714192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.714311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.714336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.714463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.714488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.714610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.714642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.714794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.714823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 
00:34:21.024 [2024-07-15 16:33:03.714946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.714971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.715071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.715096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.024 qpair failed and we were unable to recover it. 00:34:21.024 [2024-07-15 16:33:03.715204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.024 [2024-07-15 16:33:03.715232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.715341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.715369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.715484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.715523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.715627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.715651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.715828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.715857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.716028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.716056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.716220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.716244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.716357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.716398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 
00:34:21.025 [2024-07-15 16:33:03.716606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.716634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.716744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.716773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.716890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.716915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.717067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.717091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.717296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.717323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.717440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.717468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.717632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.717656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.717860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.717889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.718077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.718105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.718216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.718243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 
00:34:21.025 [2024-07-15 16:33:03.718384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.718424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.718603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.718631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.718791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.718819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.718974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.719002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.719150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.719190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.719388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.719420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.719563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.719591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.719702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.719730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.719882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.719907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.720063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.720105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 
00:34:21.025 [2024-07-15 16:33:03.720280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.720308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.720488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.720515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.720655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.720679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.720876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.720905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.721052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.721081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.721204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.721232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.721364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.721404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.721542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.721583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.721787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.721816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.721981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.722009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 
00:34:21.025 [2024-07-15 16:33:03.722178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.722204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.722364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.722392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.722555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.722583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.025 qpair failed and we were unable to recover it. 00:34:21.025 [2024-07-15 16:33:03.722704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.025 [2024-07-15 16:33:03.722732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.722938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.722963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.723085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.723116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.723284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.723312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.723466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.723493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.723674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.723701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.723863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.723889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 
00:34:21.026 [2024-07-15 16:33:03.724043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.724068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.724195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.724222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.724374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.724399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.724509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.724535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.724725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.724759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.724908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.724933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.725092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.725116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.725264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.725292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.725452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.725487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.725670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.725698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 
00:34:21.026 [2024-07-15 16:33:03.725877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.725903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.726018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.726058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.726176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.726204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.726329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.726357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.726521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.726560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.726806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.726834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.726990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.727018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.727130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.727158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.727370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.727394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 00:34:21.026 [2024-07-15 16:33:03.727541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.026 [2024-07-15 16:33:03.727569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.026 qpair failed and we were unable to recover it. 
00:34:21.026 [2024-07-15 16:33:03.727747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.026 [2024-07-15 16:33:03.727776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.026 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for the same tqpair from 16:33:03.727986 through 16:33:03.770307 ...]
00:34:21.031 [2024-07-15 16:33:03.770466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.031 [2024-07-15 16:33:03.770490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.031 qpair failed and we were unable to recover it. 00:34:21.031 [2024-07-15 16:33:03.770687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.031 [2024-07-15 16:33:03.770715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.031 qpair failed and we were unable to recover it. 00:34:21.031 [2024-07-15 16:33:03.770881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.031 [2024-07-15 16:33:03.770906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.031 qpair failed and we were unable to recover it. 00:34:21.031 [2024-07-15 16:33:03.771138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.031 [2024-07-15 16:33:03.771167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.771311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.771346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.771569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.771597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.771773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.771802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.771975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.772012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.772183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.772207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.772418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.772480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 
00:34:21.032 [2024-07-15 16:33:03.772651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.772688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.772862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.772891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.773042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.773079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.773276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.773331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.773453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.773482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.773664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.773692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.773844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.773869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.774047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.774073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.774305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.774333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.774468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.774496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 
00:34:21.032 [2024-07-15 16:33:03.774705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.774750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.774938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.774966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.775176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.775204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.775347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.775374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.775508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.775532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.775795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.775824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.776002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.776031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.776223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.776251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.776423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.776456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.776633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.776661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 
00:34:21.032 [2024-07-15 16:33:03.776777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.776811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.776958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.776986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.777169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.777193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.777434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.777483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.777640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.777668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.777806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.777835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.778034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.778058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.778268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.778317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.778459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.032 [2024-07-15 16:33:03.778487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.032 qpair failed and we were unable to recover it. 00:34:21.032 [2024-07-15 16:33:03.778634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.778663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 
00:34:21.033 [2024-07-15 16:33:03.778844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.778884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.779050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.779090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.779262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.779291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.779477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.779506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.779682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.779711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.779879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.779915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.780088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.780117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.780275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.780303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.780483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.780517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.780736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.780785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 
00:34:21.033 [2024-07-15 16:33:03.780958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.780983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.781168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.781197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.781390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.781413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.781588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.781615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.781778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.781807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.781944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.781972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.782147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.782170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.782357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.782431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.782598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.782627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.782785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.782814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 
00:34:21.033 [2024-07-15 16:33:03.782983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.783008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.783140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.783180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.783327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.783355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.783514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.783543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.783749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.783788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.784011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.784050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.784215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.784244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.784415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.784443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.784606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.784629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.784790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.784831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 
00:34:21.033 [2024-07-15 16:33:03.785006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.785034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.785186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.785214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.785408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.785431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.785605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.785642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.785820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.785850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.786064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.786092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.786244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.786267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.786517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.786565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.786709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.786745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 00:34:21.033 [2024-07-15 16:33:03.786928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.033 [2024-07-15 16:33:03.786957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.033 qpair failed and we were unable to recover it. 
00:34:21.033 [2024-07-15 16:33:03.787123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.787156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.787376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.787432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.787629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.787658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.787836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.787865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.788052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.788077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.788204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.788244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.788420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.788459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.788674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.788703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.788854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.788879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.789024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.789049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 
00:34:21.034 [2024-07-15 16:33:03.789237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.789271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.789470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.789498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.789668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.789697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.789900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.789927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.790135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.790164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.790359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.790388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.790568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.790597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.790795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.790819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.791054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.791082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.791246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.791274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 
00:34:21.034 [2024-07-15 16:33:03.791481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.791505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.791715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.791749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.791988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.792014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.792197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.792225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.792442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.792465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.792635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.792662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.792785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.792814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.792989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.793017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.793189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.793212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.793430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.793479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 
00:34:21.034 [2024-07-15 16:33:03.793690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.793718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.793910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.793938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.794048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.794082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.794255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.794295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.794484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.794512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.794690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.794718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.794930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.794954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.795189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.795239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.795418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.795446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.795645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.795673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 
00:34:21.034 [2024-07-15 16:33:03.795884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.034 [2024-07-15 16:33:03.795910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.034 qpair failed and we were unable to recover it. 00:34:21.034 [2024-07-15 16:33:03.796147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.796199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.796422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.796450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.796627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.796655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.796858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.796882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.797026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.797090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.797295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.797324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.797504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.797532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.797749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.797773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 00:34:21.035 [2024-07-15 16:33:03.797991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.035 [2024-07-15 16:33:03.798019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.035 qpair failed and we were unable to recover it. 
00:34:21.035 [2024-07-15 16:33:03.798220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.035 [2024-07-15 16:33:03.798248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.035 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 210 times in total, with log timestamps advancing from 2024-07-15 16:33:03.798 to 16:33:03.845 and the build clock from 00:34:21.035 to 00:34:21.040 ...]
00:34:21.040 [2024-07-15 16:33:03.845194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.845217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.845394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.845451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.845606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.845638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.845808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.845834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.845998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.846038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.846254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.846314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.846490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.846519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.846752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.846780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.846958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.846983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 00:34:21.040 [2024-07-15 16:33:03.847205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.040 [2024-07-15 16:33:03.847234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.040 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-15 16:33:03.847413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.847441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.847595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.847624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.847800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.847826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.848058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.848086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.848255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.848284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.848496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.848525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.848667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.848713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.848955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.848984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.849150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.849184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.849409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.849437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-15 16:33:03.849618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.849643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.849851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.849880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.850057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.850086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.850266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.850295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.850470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.850495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.850689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.850717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.850891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.850917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.851106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.851135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.851315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.851339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.851532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.851561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-15 16:33:03.851753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.851782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.851994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.852022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.852201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.852225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.852432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.852482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.852734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.852786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.852963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.852992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.853176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.853201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.853386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.853439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.853698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.853727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.853995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.854024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-15 16:33:03.854242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.854267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.854444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.854532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.854709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.854745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.854974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.855004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.855208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.855233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.855413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.855480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.855696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.855725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.855883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.855912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.856112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.856137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 00:34:21.041 [2024-07-15 16:33:03.856336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.041 [2024-07-15 16:33:03.856399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.041 qpair failed and we were unable to recover it. 
00:34:21.041 [2024-07-15 16:33:03.856571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.856599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.856821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.856851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.857046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.857071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.857312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.857362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.857514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.857543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.857718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.857753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.857940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.857970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.858163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.858221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.858368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.858397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.858564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.858592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 
00:34:21.042 [2024-07-15 16:33:03.858782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.858830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.858974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.859000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.859197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.859225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.859370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.859398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.859562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.859591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.859782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.859827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.859983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.860009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.860192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.860221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.860380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.860401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.860564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.860592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 
00:34:21.042 [2024-07-15 16:33:03.860753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.860795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.860987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.861016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.861201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.861225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.861416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.861467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.861634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.861662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.861884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.861914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.862073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.862096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.862246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.862272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.862425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.862453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.862626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.862654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 
00:34:21.042 [2024-07-15 16:33:03.862870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.862896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.863047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.863076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.863266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.863295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.863470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.863499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.863675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.863700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.863930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.863959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.864157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.864185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.864373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.864402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.864611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.864636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.864879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.864908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 
00:34:21.042 [2024-07-15 16:33:03.865069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.042 [2024-07-15 16:33:03.865098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.042 qpair failed and we were unable to recover it. 00:34:21.042 [2024-07-15 16:33:03.865275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.865303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.865515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.865539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.865703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.865732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.865924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.865953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.866101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.866129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.866309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.866332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.866525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.866591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.866766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.866796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.867007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.867035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-15 16:33:03.867196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.867220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.867404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.867460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.867640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.867669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.867853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.867878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.868054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.868079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.868319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.868379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.868568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.868596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.868807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.868837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.869016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.869058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.869244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.869295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-15 16:33:03.869534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.869563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.869789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.869818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.869955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.869981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.870201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.870254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.870421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.870450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.870643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.870671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.870882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.870908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.871049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.871077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.871289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.871318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.871454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.871482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-15 16:33:03.871674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.871699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.871925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.871954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.872129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.872157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.872332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.872361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.872583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.872611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.872816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.872845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.872988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.873017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.873231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.873260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.873440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.873466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 00:34:21.043 [2024-07-15 16:33:03.873657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.043 [2024-07-15 16:33:03.873686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.043 qpair failed and we were unable to recover it. 
00:34:21.043 [2024-07-15 16:33:03.873830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.043 [2024-07-15 16:33:03.873857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.043 qpair failed and we were unable to recover it.
00:34:21.043 [... the three messages above repeat verbatim, with only the timestamps advancing, for roughly 200 further reconnect attempts between 16:33:03.873 and 16:33:03.917; every connect() to 10.0.0.2 port 4420 fails with errno = 111 and the qpair is never recovered ...]
00:34:21.049 [2024-07-15 16:33:03.917943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.049 [2024-07-15 16:33:03.917971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.049 qpair failed and we were unable to recover it.
00:34:21.049 [2024-07-15 16:33:03.918099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.918139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.918297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.918347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.918473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.918501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.918650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.918678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.918840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.918866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.918998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.919039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.919186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.919226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.919408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.919436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.919654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.919679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.919847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.919874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 
00:34:21.049 [2024-07-15 16:33:03.920064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.920093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.920286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.920314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.920477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.920502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.920685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.920713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.920905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.920931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.921170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.921199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.921411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.921436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.921667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.921696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.921901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.921927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.922126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.922154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 
00:34:21.049 [2024-07-15 16:33:03.922338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.922362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.922537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.922590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.922766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.922794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.922950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.922978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.923105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.049 [2024-07-15 16:33:03.923144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.049 qpair failed and we were unable to recover it. 00:34:21.049 [2024-07-15 16:33:03.923275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.923300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.923492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.923521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.923644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.923672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.923834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.923864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.924039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.924067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 
00:34:21.050 [2024-07-15 16:33:03.924258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.924286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.924480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.924508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.924667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.924691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.924863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.924892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.925029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.925057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.925236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.925265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.925418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.925458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.925581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.925621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.925797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.925826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.926034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.926063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 
00:34:21.050 [2024-07-15 16:33:03.926232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.926257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.926471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.926499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.926703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.926731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.926894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.926922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.927185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.927210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.927401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.927461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.927650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.927679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.927823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.927852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.928047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.928073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.928241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.928292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 
00:34:21.050 [2024-07-15 16:33:03.928436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.928464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.928644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.928672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.928811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.928837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.928993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.929047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.929238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.929266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.929487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.929516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.929726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.929761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.929955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.929981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.930183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.930211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.930326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.930355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 
00:34:21.050 [2024-07-15 16:33:03.930537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.930566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.930728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.930772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.931017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.931043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.931237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.931266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.931485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.931509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.931690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.050 [2024-07-15 16:33:03.931719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.050 qpair failed and we were unable to recover it. 00:34:21.050 [2024-07-15 16:33:03.931911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.931940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.932091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.932120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.932353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.932378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.932603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.932651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 
00:34:21.051 [2024-07-15 16:33:03.932839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.932868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.933018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.933047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.933205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.933229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.933358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.933384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.933620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.933648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.933840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.933870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.934110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.934135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.934342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.934371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.934552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.934580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.934765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.934794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 
00:34:21.051 [2024-07-15 16:33:03.934907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.934933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.935076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.935100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.935271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.935299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.935449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.935478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.935639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.935664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.935807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.935848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.936000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.936028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.936207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.936235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.936390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.936414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.936546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.936587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 
00:34:21.051 [2024-07-15 16:33:03.936716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.936752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.936915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.936943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.937072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.937120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.937340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.937369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.937542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.937570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.937730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.937764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.937953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.937983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.938217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.938245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.938380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.938408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.938596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.938625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 
00:34:21.051 [2024-07-15 16:33:03.938806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.938832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.939025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.939051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.939284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.939313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.939463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.939491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.939706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.939734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.939997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.940038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.940219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.051 [2024-07-15 16:33:03.940258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.051 qpair failed and we were unable to recover it. 00:34:21.051 [2024-07-15 16:33:03.940418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.940445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.940630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.940659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.940816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.940843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 
00:34:21.052 [2024-07-15 16:33:03.941035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.941064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.941298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.941326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.941531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.941584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.941751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.941797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.941998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.942023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.942177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.942205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.942429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.942454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.942607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.942636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.942817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.942846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.943069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.943098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 
00:34:21.052 [2024-07-15 16:33:03.943285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.943310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.943548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.943577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.943771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.943800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.944018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.944052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.944224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.944248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.944469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.944520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.944749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.944779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.944947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.944976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.945153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.945177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 00:34:21.052 [2024-07-15 16:33:03.945364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.052 [2024-07-15 16:33:03.945420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.052 qpair failed and we were unable to recover it. 
00:34:21.052 [2024-07-15 16:33:03.945603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.052 [2024-07-15 16:33:03.945632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.052 qpair failed and we were unable to recover it.
00:34:21.052 [... the same three-line failure sequence repeats for every subsequent reconnect attempt from [2024-07-15 16:33:03.945774] through [2024-07-15 16:33:03.993463] (Jenkins time 00:34:21.052-00:34:21.337): each connect() to tqpair=0x1103fa0 at addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:34:21.337 [2024-07-15 16:33:03.993654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.337 [2024-07-15 16:33:03.993688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.337 qpair failed and we were unable to recover it. 00:34:21.337 [2024-07-15 16:33:03.993945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.337 [2024-07-15 16:33:03.993975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.337 qpair failed and we were unable to recover it. 00:34:21.337 [2024-07-15 16:33:03.994111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.337 [2024-07-15 16:33:03.994140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.337 qpair failed and we were unable to recover it. 00:34:21.337 [2024-07-15 16:33:03.994317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.337 [2024-07-15 16:33:03.994346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.337 qpair failed and we were unable to recover it. 00:34:21.337 [2024-07-15 16:33:03.994603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.337 [2024-07-15 16:33:03.994627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.337 qpair failed and we were unable to recover it. 00:34:21.337 [2024-07-15 16:33:03.994816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.337 [2024-07-15 16:33:03.994841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.337 qpair failed and we were unable to recover it. 00:34:21.337 [2024-07-15 16:33:03.995103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.337 [2024-07-15 16:33:03.995132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.337 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.995351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.995380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.995632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.995661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.995910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.995951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 
00:34:21.338 [2024-07-15 16:33:03.996139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.996176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.996338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.996374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.996633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.996662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.996844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.996868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.997126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.997155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.997376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.997405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.997646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.997670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.997834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.997862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.998024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.998065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.998322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.998351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 
00:34:21.338 [2024-07-15 16:33:03.998579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.998603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.998761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.998791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.999014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.999043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.999265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.999294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.999531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.999555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:03.999763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:03.999798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.000007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.000036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.000271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.000300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.000532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.000556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.000799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.000829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 
00:34:21.338 [2024-07-15 16:33:04.001012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.001041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.001266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.001295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.001526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.001549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.001788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.001813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.002056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.002085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.002275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.002304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.002479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.002503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.002712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.002747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.002898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.002927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.003104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.003132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 
00:34:21.338 [2024-07-15 16:33:04.003323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.003347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.003539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.003590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.003812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.003842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.004061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.004090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.004267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.004292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.004455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.004509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.004641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.004670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.338 [2024-07-15 16:33:04.004844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.338 [2024-07-15 16:33:04.004870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.338 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.005092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.005116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.005332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.005382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 
00:34:21.339 [2024-07-15 16:33:04.005569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.005598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.005816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.005847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.006053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.006078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.006257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.006318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.006536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.006564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.006762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.006792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.006952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.006974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.007181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.007233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.007420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.007449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.007639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.007667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 
00:34:21.339 [2024-07-15 16:33:04.007867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.007894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.008096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.008157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.008302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.008331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.008509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.008538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.008700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.008724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.008964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.008993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.009166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.009195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.009378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.009407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.009573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.009603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.009831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.009861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 
00:34:21.339 [2024-07-15 16:33:04.010079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.010107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.010328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.010357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.010579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.010602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.010891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.010944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.011139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.011180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.011442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.011470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.011651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.011681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.011861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.011890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.012104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.012132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.012322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.012361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 
00:34:21.339 [2024-07-15 16:33:04.012541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.012564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.339 qpair failed and we were unable to recover it. 00:34:21.339 [2024-07-15 16:33:04.012807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.339 [2024-07-15 16:33:04.012832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.013003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.013045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.013236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.013265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.013469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.013492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.013630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.013658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.013888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.013914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.014121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.014150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.014397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.014421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.014614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.014643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 
00:34:21.340 [2024-07-15 16:33:04.014781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.014811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.015049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.015078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.015302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.015326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.015548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.015598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.015818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.015847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.016065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.016098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.016290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.016315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.016558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.016607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.016825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.016855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.017050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.017079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 
00:34:21.340 [2024-07-15 16:33:04.017270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.017294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.017507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.017558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.017723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.017756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.017949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.017978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.018162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.018186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.018360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.018413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.018571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.018600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.018829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.018858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.018997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.019038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.019251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.019302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 
00:34:21.340 [2024-07-15 16:33:04.019502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.019531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.019758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.019787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.019997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.020022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.020226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.020277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.020452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.020481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.020679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.020708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.020962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.020988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.021180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.021233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.021426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.021455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.021640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.021669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 
00:34:21.340 [2024-07-15 16:33:04.021891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.021917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.022090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.340 [2024-07-15 16:33:04.022132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.340 qpair failed and we were unable to recover it. 00:34:21.340 [2024-07-15 16:33:04.022325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.022358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 00:34:21.341 [2024-07-15 16:33:04.022559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.022588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 00:34:21.341 [2024-07-15 16:33:04.022809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.022834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 00:34:21.341 [2024-07-15 16:33:04.023035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.023064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 00:34:21.341 [2024-07-15 16:33:04.023262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.023291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 00:34:21.341 [2024-07-15 16:33:04.023516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.023545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 00:34:21.341 [2024-07-15 16:33:04.023791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.023816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 00:34:21.341 [2024-07-15 16:33:04.024046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.024107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it. 
00:34:21.341 [2024-07-15 16:33:04.024305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.341 [2024-07-15 16:33:04.024333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.341 qpair failed and we were unable to recover it.
[the same two-line error pair repeats continuously from 16:33:04.024567 through 16:33:04.073675: every reconnect attempt for tqpair=0x1103fa0 against 10.0.0.2 port 4420 fails with errno = 111, and each time the qpair cannot be recovered]
00:34:21.346 [2024-07-15 16:33:04.073865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.073895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it.
00:34:21.346 [2024-07-15 16:33:04.074125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.074148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.074409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.074460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.074622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.074650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.074842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.074872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.075108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.075132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.075344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.075395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.075645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.075674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.075862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.075891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.076119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.076143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.076388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.076437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 
00:34:21.346 [2024-07-15 16:33:04.076668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.076696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.076932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.076962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.077153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.077177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.077374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.077425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.077593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.077622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.077800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.077830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.078052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.078075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.078350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.078399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.078625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.078653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.078853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.078894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 
00:34:21.346 [2024-07-15 16:33:04.079089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.079124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.079296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.079350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.346 qpair failed and we were unable to recover it. 00:34:21.346 [2024-07-15 16:33:04.079606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.346 [2024-07-15 16:33:04.079634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.079884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.079914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.080100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.080135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.080399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.080428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.080683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.080712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.080999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.081028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.081212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.081235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.081465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.081515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 
00:34:21.347 [2024-07-15 16:33:04.081665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.081692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.081942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.081972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.082193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.082218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.082408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.082470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.082679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.082708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.082861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.082887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.083112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.083136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.083379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.083428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.083684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.083713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.083951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.083980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 
00:34:21.347 [2024-07-15 16:33:04.084181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.084204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.084419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.084448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.084679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.084708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.084951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.084981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.085182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.085215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.085437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.085488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.085677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.085705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.085942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.085971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.086185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.086209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.086431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.086482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 
00:34:21.347 [2024-07-15 16:33:04.086658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.086687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.086924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.086954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.087149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.087174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.087421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.087471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.087618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.087647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.087839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.087869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.088064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.088103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.088355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.088405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.088677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.088705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.088885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.088914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 
00:34:21.347 [2024-07-15 16:33:04.089114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.089146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.089378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.089429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.089616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.089645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.347 qpair failed and we were unable to recover it. 00:34:21.347 [2024-07-15 16:33:04.089823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.347 [2024-07-15 16:33:04.089852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.090051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.090076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.090334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.090382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.090596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.090626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.090859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.090893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.091131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.091155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.091397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.091448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 
00:34:21.348 [2024-07-15 16:33:04.091686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.091715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.091993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.092020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.092313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.092337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.092577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.092627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.092835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.092865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.093055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.093094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.093326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.093351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.093591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.093640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.093842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.093872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.094065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.094094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 
00:34:21.348 [2024-07-15 16:33:04.094306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.094329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.094590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.094639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.094835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.094865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.095054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.095082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.095263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.095286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.095530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.095582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.095747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.095775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.095970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.095999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.096203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.096228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.096392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.096442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 
00:34:21.348 [2024-07-15 16:33:04.096634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.096662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.096812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.096842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.097051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.097076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.097319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.097381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.097537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.097581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.097809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.097839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.098051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.098076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.348 [2024-07-15 16:33:04.098296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.348 [2024-07-15 16:33:04.098348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.348 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.098554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.098583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.098735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.098792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 
00:34:21.349 [2024-07-15 16:33:04.098999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.099025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.099224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.099275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.099457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.099486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.099717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.099751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.099987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.100013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.100227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.100279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.100464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.100493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.100689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.100719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.100952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.100992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.101168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.101232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 
00:34:21.349 [2024-07-15 16:33:04.101432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.101460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.101615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.101642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.101797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.101823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.102019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.102048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.102256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.102284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.102441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.102470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.102683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.102708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.102955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.102984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.103223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.103252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.103448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.103477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 
00:34:21.349 [2024-07-15 16:33:04.103644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.103682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.103882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.103912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.104098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.104127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.104292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.104321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.104518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.104554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.104764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.104805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.105009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.105038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.105231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.105259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.105497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.105521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.105687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.105716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 
00:34:21.349 [2024-07-15 16:33:04.105901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.105928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.106137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.106165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.106303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.106327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.106497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.106539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.106747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.106776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.106988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.107017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.107250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.107274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.107532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.107584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.349 qpair failed and we were unable to recover it. 00:34:21.349 [2024-07-15 16:33:04.107750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.349 [2024-07-15 16:33:04.107779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.350 qpair failed and we were unable to recover it. 00:34:21.350 [2024-07-15 16:33:04.108006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.350 [2024-07-15 16:33:04.108035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.350 qpair failed and we were unable to recover it. 
00:34:21.354 [2024-07-15 16:33:04.153814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.153843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.354 qpair failed and we were unable to recover it. 00:34:21.354 [2024-07-15 16:33:04.154030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.154059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.354 qpair failed and we were unable to recover it. 00:34:21.354 [2024-07-15 16:33:04.154303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.154326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.354 qpair failed and we were unable to recover it. 00:34:21.354 [2024-07-15 16:33:04.154613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.154662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.354 qpair failed and we were unable to recover it. 00:34:21.354 [2024-07-15 16:33:04.154922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.154952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.354 qpair failed and we were unable to recover it. 00:34:21.354 [2024-07-15 16:33:04.155153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.155181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.354 qpair failed and we were unable to recover it. 00:34:21.354 [2024-07-15 16:33:04.155346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.155371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.354 qpair failed and we were unable to recover it. 00:34:21.354 [2024-07-15 16:33:04.155618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.354 [2024-07-15 16:33:04.155647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.155862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.155893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.156063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.156092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 
00:34:21.355 [2024-07-15 16:33:04.156341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.156366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.156617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.156646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.156868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.156897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.157096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.157124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.157293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.157317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.157550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.157603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.157781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.157808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.158041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.158070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.158310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.158348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.158541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.158570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 
00:34:21.355 [2024-07-15 16:33:04.158716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.158758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.158905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.158933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.159149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.159188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.159448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.159496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.159656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.159685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.159895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.159921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.160128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.160154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.160347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.160406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.160613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.160642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.160836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.160866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 
00:34:21.355 [2024-07-15 16:33:04.161062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.161100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.161310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.161359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.161547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.161576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.161771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.161800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.162008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.162048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.162307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.162358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.162495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.162524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.162721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.162765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.162934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.162960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.163160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.163227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 
00:34:21.355 [2024-07-15 16:33:04.163426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.163455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.163639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.163669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.163868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.163910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.164059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.164087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.164292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.164321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.164467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.164496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.164681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.164705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.164841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.164882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.355 [2024-07-15 16:33:04.165113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.355 [2024-07-15 16:33:04.165142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.355 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.165329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.165363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 
00:34:21.356 [2024-07-15 16:33:04.165541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.165581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.165790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.165848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.166034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.166063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.166251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.166280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.166492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.166517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.166719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.166751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.166952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.166981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.167210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.167239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.167440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.167464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.167664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.167693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 
00:34:21.356 [2024-07-15 16:33:04.167897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.167924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.168149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.168178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.168377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.168401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.168602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.168632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.168812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.168841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.169070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.169098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.169301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.169326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.169581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.169640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.169807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.169835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.170061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.170090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 
00:34:21.356 [2024-07-15 16:33:04.170262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.170287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.170431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.170500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.170734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.170778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.171010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.171039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.171264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.171303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.171491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.171541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.171783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.171818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.172052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.172081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.172248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.172273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.172482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.172532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 
00:34:21.356 [2024-07-15 16:33:04.172730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.172764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.172925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.172955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.173171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.173210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.173428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.173484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.173678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.173707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.173911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.173941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.174137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.174161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.174421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.174473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.174678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.174706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.356 qpair failed and we were unable to recover it. 00:34:21.356 [2024-07-15 16:33:04.174905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.356 [2024-07-15 16:33:04.174934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 
00:34:21.357 [2024-07-15 16:33:04.175147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.175172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.175426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.175477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.175680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.175708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.175910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.175940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.176159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.176197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.176364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.176418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.176644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.176673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.176910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.176936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.177085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.177111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.177341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.177403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 
00:34:21.357 [2024-07-15 16:33:04.177615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.177643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.177855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.177884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.178097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.178122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.178332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.178387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.178561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.178590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.178783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.178811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.179042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.179081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.179228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.179253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.179496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.179525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.179687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.179715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 
00:34:21.357 [2024-07-15 16:33:04.179920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.179945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.180176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.180221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.180433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.180462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.180651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.180680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.180910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.180937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.181086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.181114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.181343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.181372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.181624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.181654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.181842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.181869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.182120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.182167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 
00:34:21.357 [2024-07-15 16:33:04.182441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.182470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.182708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.182743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.182916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.182942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.183144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.183191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.183451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.357 [2024-07-15 16:33:04.183479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.357 qpair failed and we were unable to recover it. 00:34:21.357 [2024-07-15 16:33:04.183736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.183770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.183990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.184015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.184193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.184240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.184458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.184486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.184694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.184723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 
00:34:21.358 [2024-07-15 16:33:04.184901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.184927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.185126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.185173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.185325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.185352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.185578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.185607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.185777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.185804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.185980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.186023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.186209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.186239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.186467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.186496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.186695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.186724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.186941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.186968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 
00:34:21.358 [2024-07-15 16:33:04.187144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.187172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.187414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.187443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.187635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.187663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.187901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.187928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.188075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.188105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.188306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.188335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.188548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.188572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.188752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.188781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.188931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.188957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 00:34:21.358 [2024-07-15 16:33:04.189146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.358 [2024-07-15 16:33:04.189175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.358 qpair failed and we were unable to recover it. 
00:34:21.363 [2024-07-15 16:33:04.232613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.232642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.232826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.232855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.233046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.233071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.233259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.233304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.233517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.233547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.233716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.233751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.233947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.233973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.234160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.234189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.234398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.234427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.234603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.234639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 
00:34:21.363 [2024-07-15 16:33:04.234787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.234814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.234953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.234979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.235111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.235139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.235363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.235391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.235624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.235651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.235838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.235864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.235983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.236010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.236183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.236212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.236335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.236380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.236618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.236647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 
00:34:21.363 [2024-07-15 16:33:04.236844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.236870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.237075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.237117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.237264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.237289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.237479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.237507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.363 qpair failed and we were unable to recover it. 00:34:21.363 [2024-07-15 16:33:04.237703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.363 [2024-07-15 16:33:04.237732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.237909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.237935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.238085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.238114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.238349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.238374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.238559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.238595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.238811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.238848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 
00:34:21.364 [2024-07-15 16:33:04.238995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.239021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.239171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.239214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.239446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.239475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.239701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.239730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.239904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.239930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.240066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.240105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.240308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.240333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.240502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.240537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.240743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.240789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.240901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.240926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 
00:34:21.364 [2024-07-15 16:33:04.241117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.241141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.241330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.241354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.241516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.241566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.241693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.241721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.241871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.241897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.242053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.242078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.242231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.242287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.242439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.242467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.242702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.242735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.242923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.242950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 
00:34:21.364 [2024-07-15 16:33:04.243120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.243149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.243293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.243321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.243549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.243598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.243771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.243809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.243957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.243983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.244127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.244170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.244329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.244358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.244557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.244585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.244764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.244816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.244966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.244992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 
00:34:21.364 [2024-07-15 16:33:04.245174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.245203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.245336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.245361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.245510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.245551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.245704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.245732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.245862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.245887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.364 qpair failed and we were unable to recover it. 00:34:21.364 [2024-07-15 16:33:04.246054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.364 [2024-07-15 16:33:04.246079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.246212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.246240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.246430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.246459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.246642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.246671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.246823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.246849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 
00:34:21.365 [2024-07-15 16:33:04.246983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.247010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.247213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.247241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.247403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.247457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.247651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.247679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.247804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.247830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.247949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.247980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.248143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.248172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.248279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.248316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.248548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.248578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.248766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.248816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 
00:34:21.365 [2024-07-15 16:33:04.248937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.248963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.249089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.249115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.249274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.249303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.249482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.249510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.249659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.249687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.249849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.249875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.250025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.250067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.250229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.250259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.250429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.250487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.250679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.250708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 
00:34:21.365 [2024-07-15 16:33:04.250867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.250893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.251006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.251046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.251189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.251218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.251375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.251402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.251580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.251609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.251784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.251810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.251927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.251953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.252082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.252121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.252304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.252333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.365 [2024-07-15 16:33:04.252476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.252504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 
00:34:21.365 [2024-07-15 16:33:04.252625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.365 [2024-07-15 16:33:04.252653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.365 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.252769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.252812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.252962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.252988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.253130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.253158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.253309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.253337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.253496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.253524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.253675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.253703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.253838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.253864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.253979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.254004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.254177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.254201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 
00:34:21.366 [2024-07-15 16:33:04.254328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.254352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.254525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.254552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.254702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.254730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.254916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.254943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.255071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.255096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.255258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.255286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.255465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.255494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.255643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.255667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.255860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.255887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.256034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.256062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 
00:34:21.366 [2024-07-15 16:33:04.256228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.256257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.256414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.256437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.256604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.256633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.256807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.256833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.256980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.257006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.257181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.257204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.257373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.257402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.257547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.257575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.257721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.257758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.257927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.257953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 
00:34:21.366 [2024-07-15 16:33:04.258118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.258147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.258329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.258357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.258484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.258512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.258667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.258706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.258882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.258909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.259040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.259069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.259216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.259245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.259422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.259445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.259606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.259635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 00:34:21.366 [2024-07-15 16:33:04.259805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.366 [2024-07-15 16:33:04.259832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.366 qpair failed and we were unable to recover it. 
00:34:21.366 [2024-07-15 16:33:04.259957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.366 [2024-07-15 16:33:04.259983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.366 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1037:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats back-to-back for every reconnect attempt from 16:33:04.260 through 16:33:04.299; only the timestamps differ between occurrences ...]
00:34:21.652 [2024-07-15 16:33:04.299448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.652 [2024-07-15 16:33:04.299474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.652 qpair failed and we were unable to recover it.
00:34:21.652 [2024-07-15 16:33:04.299619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.299645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.299824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.299854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.300012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.300040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.300215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.300241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.300401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.300430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.300573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.300602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.300759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.300789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.300928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.300955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.301076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.301102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.301269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.301298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 
00:34:21.652 [2024-07-15 16:33:04.301424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.301457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.301585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.301612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.301752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.301795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.301925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.301954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.302068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.302095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.302247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.302273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.302454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.302483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.302658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.652 [2024-07-15 16:33:04.302687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.652 qpair failed and we were unable to recover it. 00:34:21.652 [2024-07-15 16:33:04.302834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.302863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.302997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.303023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 
00:34:21.653 [2024-07-15 16:33:04.303155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.303181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.303307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.303336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.303489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.303517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.303668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.303696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.303849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.303876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.303982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.304008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.304189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.304218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.304400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.304426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.304542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.304584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.304709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.304745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 
00:34:21.653 [2024-07-15 16:33:04.304908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.304934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.305044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.305070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.305209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.305250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.305386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.305415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.305568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.305596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.305782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.305809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.305922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.305966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.306119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.306151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.306291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.306319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.306465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.306491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 
00:34:21.653 [2024-07-15 16:33:04.306661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.306705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.306822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.306848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.306965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.306990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.307137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.307163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.307272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.307314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.307497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.307525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.307629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.307657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.307789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.307816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.307944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.307970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.308142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.308171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 
00:34:21.653 [2024-07-15 16:33:04.308310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.308338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.308500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.308526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.308674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.308718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.308874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.308904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.309027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.309056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.309233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.309258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.309355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.309381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.309545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.309573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.309759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.653 [2024-07-15 16:33:04.309789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.653 qpair failed and we were unable to recover it. 00:34:21.653 [2024-07-15 16:33:04.309930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.309956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 
00:34:21.654 [2024-07-15 16:33:04.310112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.310155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.310304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.310332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.310482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.310510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.310628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.310654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.310776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.310806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.310946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.310975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.311128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.311157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.311269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.311297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.311430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.311455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.311598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.311624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 
00:34:21.654 [2024-07-15 16:33:04.311816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.311845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.311995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.312023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.312141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.312167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.312322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.312348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.312480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.312508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.312655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.312684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.312848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.312877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.312978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.313007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.313136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.313163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.313337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.313392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 
00:34:21.654 [2024-07-15 16:33:04.313569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.313597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.313752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.313782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.313930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.313956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.314132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.314161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.314304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.314332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.314524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.314553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.314720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.314752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.314894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.314937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.315093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.315121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.315245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.315274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 
00:34:21.654 [2024-07-15 16:33:04.315424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.315448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.315667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.315695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.315856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.315883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.316003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.316029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.316203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.316228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.316346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.316372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.316505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.316534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.316660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.316688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.316855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.654 [2024-07-15 16:33:04.316882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.654 qpair failed and we were unable to recover it. 00:34:21.654 [2024-07-15 16:33:04.317004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.317030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 
00:34:21.655 [2024-07-15 16:33:04.317206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.317234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.317409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.317437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.317584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.317609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.317714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.317746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.317888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.317917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.318044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.318076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.318254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.318279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.318396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.318439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.318593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.318621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.318731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.318767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 
00:34:21.655 [2024-07-15 16:33:04.318932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.318958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.319151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.319179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.319363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.319391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.319542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.319571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.319758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.319784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.319918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.319946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.320137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.320165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.320359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.320388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.320568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.320593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.320775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.320805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 
00:34:21.655 [2024-07-15 16:33:04.320961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.320989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.321143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.321171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.321299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.321324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.321469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.321495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.321634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.321662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.321805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.321834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.321987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.322013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.322138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.322179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.322364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.322392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 00:34:21.655 [2024-07-15 16:33:04.322572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.655 [2024-07-15 16:33:04.322600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.655 qpair failed and we were unable to recover it. 
00:34:21.655 [2024-07-15 16:33:04.322793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.655 [2024-07-15 16:33:04.322819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.655 qpair failed and we were unable to recover it.
00:34:21.655 [... the same three-line error sequence repeats roughly 200 more times between 16:33:04.322 and 16:33:04.365061, always with errno = 111, tqpair=0x1103fa0, addr=10.0.0.2, port=4420; repetitions collapsed ...]
00:34:21.661 [2024-07-15 16:33:04.365234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.365262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.365441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.365470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.365627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.365655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.365851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.365878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.366106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.366135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.366341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.366370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.366621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.366646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.366800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.366824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.366995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.367033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.367209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.367241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 
00:34:21.661 [2024-07-15 16:33:04.367457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.367482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.367668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.367697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.367849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.367875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.368061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.368085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.368242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.368266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.368479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.368528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.368681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.368709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.368902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.368931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.369173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.369199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.369350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.369401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 
00:34:21.661 [2024-07-15 16:33:04.369576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.369604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.369773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.369803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.369926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.369951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.370158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.370225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.370404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.370432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.370562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.370588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.370749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.370772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.370987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.371016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.371176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.371205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.371366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.371394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 
00:34:21.661 [2024-07-15 16:33:04.371562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.371586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.371792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.371859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.372099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.372128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.661 qpair failed and we were unable to recover it. 00:34:21.661 [2024-07-15 16:33:04.372331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.661 [2024-07-15 16:33:04.372359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.372579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.372603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.372807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.372862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.373043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.373076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.373246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.373275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.373446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.373469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.373682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.373710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 
00:34:21.662 [2024-07-15 16:33:04.373898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.373926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.374108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.374136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.374345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.374368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.374582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.374611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.374819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.374849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.375020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.375049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.375193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.375225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.375468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.375502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.375728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.375762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.375909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.375937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 
00:34:21.662 [2024-07-15 16:33:04.376169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.376193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.376449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.376498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.376657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.376686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.376930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.376961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.377185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.377209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.377398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.377445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.377650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.377679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.377816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.377846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.378057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.378081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.378289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.378350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 
00:34:21.662 [2024-07-15 16:33:04.378532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.378560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.378733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.378768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.378955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.378979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.379168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.379232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.379431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.379460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.379666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.379695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.379872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.379896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.380134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.380185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.380361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.380390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.380573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.380601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 
00:34:21.662 [2024-07-15 16:33:04.380766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.380789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.381017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.381046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.381215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.381244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.381389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.662 [2024-07-15 16:33:04.381418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.662 qpair failed and we were unable to recover it. 00:34:21.662 [2024-07-15 16:33:04.381593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.381621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.381815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.381842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.382034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.382062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.382280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.382309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.382527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.382551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.382751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.382780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 
00:34:21.663 [2024-07-15 16:33:04.382951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.382976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.383203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.383231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.383447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.383471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.383698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.383726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.383927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.383956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.384138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.384167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.384382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.384406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.384577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.384605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.384818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.384847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.385028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.385057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 
00:34:21.663 [2024-07-15 16:33:04.385222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.385245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.385478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.385528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.385751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.385780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.385950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.385979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.386132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.386156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.386385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.386436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.386653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.386682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.386872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.386901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.387099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.387138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.387294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.387349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 
00:34:21.663 [2024-07-15 16:33:04.387498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.387527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.387704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.387732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.387952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.387976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.388148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.388217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.388373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.388406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.388615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.388644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.388853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.388878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.389036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.389074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.389255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.389284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.389462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.389496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 
00:34:21.663 [2024-07-15 16:33:04.389680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.389708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.389914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.389941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.390154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.390183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.390351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.390380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.390558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.390586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.663 qpair failed and we were unable to recover it. 00:34:21.663 [2024-07-15 16:33:04.390809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.663 [2024-07-15 16:33:04.390835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.391043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.391068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.391254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.391283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.391476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.391499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.391673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.391702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 
00:34:21.664 [2024-07-15 16:33:04.391849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.391889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.392075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.392103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.392277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.392301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.392514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.392565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.392680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.392708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.392949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.392978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.393196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.393230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.393425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.393473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.393686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.393715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 00:34:21.664 [2024-07-15 16:33:04.393907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.664 [2024-07-15 16:33:04.393936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.664 qpair failed and we were unable to recover it. 
00:34:21.664 [2024-07-15 16:33:04.394148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.664 [2024-07-15 16:33:04.394172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:21.664 qpair failed and we were unable to recover it.
[the same three-line failure (posix.c:1037 connect() errno = 111, nvme_tcp.c:2374 sock connection error, qpair unrecoverable) repeats continuously for tqpair=0x1103fa0 with addr=10.0.0.2, port=4420, with source timestamps advancing from 16:33:04.394 to 16:33:04.442]
00:34:21.669 [2024-07-15 16:33:04.442779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.442820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.443059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.443100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.443269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.443293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.443488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.443536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.443729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.443764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.443952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.443981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.444183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.444207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.444405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.444456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.444646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.444678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 00:34:21.669 [2024-07-15 16:33:04.444898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.669 [2024-07-15 16:33:04.444928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.669 qpair failed and we were unable to recover it. 
00:34:21.669 [2024-07-15 16:33:04.445137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.445161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.445348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.445398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.445616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.445644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.445800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.445829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.446018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.446043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.446220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.446275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.446459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.446498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.446698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.446727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.446932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.446957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.447134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.447197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 
00:34:21.670 [2024-07-15 16:33:04.447374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.447403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.447637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.447666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.447877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.447903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.448148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.448203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.448435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.448464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.448669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.448698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.448894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.448919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.449167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.449219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.449409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.449438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.449636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.449665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 
00:34:21.670 [2024-07-15 16:33:04.449889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.449916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.450106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.450160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.450381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.450409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.450597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.450625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.450844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.450869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.451065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.451094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.451281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.451310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.451512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.451542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.451792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.451824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.452074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.452123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 
00:34:21.670 [2024-07-15 16:33:04.452348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.452377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.452564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.452593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.452788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.452830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.452991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.453017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.453214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.453242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.453412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.453441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.453630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.453654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.453899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.453930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.454120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.454147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.454335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.454364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 
00:34:21.670 [2024-07-15 16:33:04.454567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.670 [2024-07-15 16:33:04.454591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.670 qpair failed and we were unable to recover it. 00:34:21.670 [2024-07-15 16:33:04.454731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.454765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.454953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.454981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.455190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.455218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.455438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.455462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.455648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.455676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.455894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.455923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.456107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.456136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.456356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.456380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.456535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.456564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 
00:34:21.671 [2024-07-15 16:33:04.456743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.456785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.456986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.457012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.457164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.457188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.457417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.457484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.457676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.457704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.457910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.457939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.458135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.458159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.458370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.458422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.458597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.458626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.458815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.458844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 
00:34:21.671 [2024-07-15 16:33:04.459078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.459102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.459295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.459345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.459570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.459599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.459754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.459783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.459967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.459992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.460199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.460250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.460440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.460473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.460656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.460685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.460883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.460907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.461163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.461214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 
00:34:21.671 [2024-07-15 16:33:04.461444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.461473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.461666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.461695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.461862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.461889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.462070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.462099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.462240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.462269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.462446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.462474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.462650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.462673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.462864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.462893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.463064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.463093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.463274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.463303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 
00:34:21.671 [2024-07-15 16:33:04.463529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.463553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.463746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.671 [2024-07-15 16:33:04.463776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.671 qpair failed and we were unable to recover it. 00:34:21.671 [2024-07-15 16:33:04.464006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.464035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.464260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.464289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.464473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.464497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.464727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.464763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.465009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.465047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.465198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.465226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.465462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.465485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.465676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.465704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 
00:34:21.672 [2024-07-15 16:33:04.465928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.465954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.466175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.466204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.466382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.466406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.466610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.466642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.466824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.466850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.467055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.467081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.467267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.467291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.467538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.467588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.467828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.467858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.468091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.468119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 
00:34:21.672 [2024-07-15 16:33:04.468314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.468339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.468549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.468578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.468786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.468815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.468968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.468997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.469234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.469258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.469507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.469556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.469750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.469780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.470011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.470040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.470284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.470308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.470563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.470613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 
00:34:21.672 [2024-07-15 16:33:04.470787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.470816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.471041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.471070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.471306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.471330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.471544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.471594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.471829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.471858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.472027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.472057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.472228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.472251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.472454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.472505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.472632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.472665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.472892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.472923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 
00:34:21.672 [2024-07-15 16:33:04.473127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.473154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.473398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.473443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.473602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.473631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.672 qpair failed and we were unable to recover it. 00:34:21.672 [2024-07-15 16:33:04.473861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.672 [2024-07-15 16:33:04.473891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.673 qpair failed and we were unable to recover it. 00:34:21.673 [2024-07-15 16:33:04.474107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.673 [2024-07-15 16:33:04.474131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.673 qpair failed and we were unable to recover it. 00:34:21.673 [2024-07-15 16:33:04.474341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.673 [2024-07-15 16:33:04.474392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.673 qpair failed and we were unable to recover it. 00:34:21.673 [2024-07-15 16:33:04.474634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.673 [2024-07-15 16:33:04.474662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.673 qpair failed and we were unable to recover it. 00:34:21.673 [2024-07-15 16:33:04.474853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.673 [2024-07-15 16:33:04.474883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.673 qpair failed and we were unable to recover it. 00:34:21.673 [2024-07-15 16:33:04.475081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.673 [2024-07-15 16:33:04.475119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.673 qpair failed and we were unable to recover it. 00:34:21.673 [2024-07-15 16:33:04.475335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.673 [2024-07-15 16:33:04.475386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.673 qpair failed and we were unable to recover it. 
[log collapsed: the identical triplet — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt from 2024-07-15 16:33:04.473398 through 16:33:04.525528; duplicate entries elided]
00:34:21.677 [2024-07-15 16:33:04.525772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.677 [2024-07-15 16:33:04.525802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.677 qpair failed and we were unable to recover it. 00:34:21.677 [2024-07-15 16:33:04.526040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.526069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.526272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.526296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.526578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.526629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.526861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.526891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.527083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.527112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.527295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.527319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.527480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.527541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.527783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.527812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.528049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.528077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 
00:34:21.678 [2024-07-15 16:33:04.528309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.528333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.528577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.528626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.528824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.528853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.529010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.529039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.529272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.529296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.529566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.529616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.529820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.529846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.530095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.530124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.530343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.530367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.530607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.530636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 
00:34:21.678 [2024-07-15 16:33:04.530874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.530903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.531087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.531117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.531342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.531366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.531529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.531557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.531764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.531792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.531972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.532001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.532213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.532236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.532453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.532502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.532745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.532775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.532916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.532945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 
00:34:21.678 [2024-07-15 16:33:04.533176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.533200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.533465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.533517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.533729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.533774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.534018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.534046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.534266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.534306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.534512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.534566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.534723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.534760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.534911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.534940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.535143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.535182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.535370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.535419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 
00:34:21.678 [2024-07-15 16:33:04.535602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.535632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.535876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.535906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.536116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.536140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.536413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.536464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.536685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.536714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.536908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.536937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.537155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.537180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.537447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.537500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.537761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.678 [2024-07-15 16:33:04.537791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.678 qpair failed and we were unable to recover it. 00:34:21.678 [2024-07-15 16:33:04.538035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.538064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 
00:34:21.679 [2024-07-15 16:33:04.538271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.538296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.538502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.538552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.538802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.538827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.539073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.539103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.539296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.539321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.539493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.539578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.539795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.539824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.540032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.540062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.540295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.540319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.540548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.540598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 
00:34:21.679 [2024-07-15 16:33:04.540810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.540839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.541034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.541061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.541302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.541327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.541603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.541655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.541855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.541884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.542110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.542139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.542350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.542374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.542594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.542623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.542821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.542850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.543049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.543078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 
00:34:21.679 [2024-07-15 16:33:04.543279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.543303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.543575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.543626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.543874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.543903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.544089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.544118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.544318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.544342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.544553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.544587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.544795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.544825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.545062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.545091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.545329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.545353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.545579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.545609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 
00:34:21.679 [2024-07-15 16:33:04.545769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.545799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.545998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.546028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.546242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.546267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.546484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.546534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.546764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.546793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.547012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.547040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.547261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.547285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.547517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.547566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.547762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.547804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.548011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.548053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 
00:34:21.679 [2024-07-15 16:33:04.548271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.548295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.548508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.548558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.548768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.548797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.549014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.549043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.549225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.549249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.549513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.549563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.549820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.549849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.679 qpair failed and we were unable to recover it. 00:34:21.679 [2024-07-15 16:33:04.550053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.679 [2024-07-15 16:33:04.550082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.550304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.550328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.550593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.550641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 
00:34:21.680 [2024-07-15 16:33:04.550879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.550908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.551109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.551138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.551368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.551395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.551652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.551703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.551915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.551941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.552140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.552169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.552307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.552329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.552604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.552662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.552904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.552934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.553141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.553170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 
00:34:21.680 [2024-07-15 16:33:04.553413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.553437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.553640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.553669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.553906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.553935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.554175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.554204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.554444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.554468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.554752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.554787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.554960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.554989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.555229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.555258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.555512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.555536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.555788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.555819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 
00:34:21.680 [2024-07-15 16:33:04.556053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.556082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.556301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.556330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.556580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.556604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.556807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.556838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.557057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.557086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.557331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.557360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.557556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.557580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.557767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.557819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.558013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.558042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 00:34:21.680 [2024-07-15 16:33:04.558290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.558319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it. 
00:34:21.680 [2024-07-15 16:33:04.558590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.680 [2024-07-15 16:33:04.558614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.680 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failure with errno 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x1103fa0 at 10.0.0.2:4420, followed by "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 16:33:04.558829 and 16:33:04.610624; duplicate entries omitted ...]
00:34:21.971 [2024-07-15 16:33:04.610836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.610867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it.
00:34:21.971 [2024-07-15 16:33:04.611097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.611126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.611378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.611407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.611631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.611657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.611879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.611909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.612139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.612169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.612410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.612440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.612670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.612697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.612925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.612956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.613142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.613172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 00:34:21.971 [2024-07-15 16:33:04.613328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.971 [2024-07-15 16:33:04.613358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.971 qpair failed and we were unable to recover it. 
00:34:21.972 [2024-07-15 16:33:04.613566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.613593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.613841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.613869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.614053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.614082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.614319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.614348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.614533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.614559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.614751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.614795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.614958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.614985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.615124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.615152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.615410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.615437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 00:34:21.972 [2024-07-15 16:33:04.615686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.972 [2024-07-15 16:33:04.615714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.972 qpair failed and we were unable to recover it. 
00:34:21.972 [2024-07-15 16:33:04.615945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.615972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.616158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.616185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.616359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.616385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.616601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.616629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.616847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.616875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.617063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.617089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.617324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.617350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.617501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.617529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.617715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.617749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.618009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.618036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 
00:34:21.973 [2024-07-15 16:33:04.618226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.618251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.618469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.618495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.618701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.618728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.973 qpair failed and we were unable to recover it. 00:34:21.973 [2024-07-15 16:33:04.618968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.973 [2024-07-15 16:33:04.618995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.619218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.619244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.619511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.619538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.619752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.619779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.619936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.619962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.620206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.620232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.620441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.620468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 
00:34:21.974 [2024-07-15 16:33:04.620625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.620650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.620869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.620896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.621111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.621136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.621315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.621341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.621524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.621550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.621786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.621813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.622057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.622083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.622342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.974 [2024-07-15 16:33:04.622372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.974 qpair failed and we were unable to recover it. 00:34:21.974 [2024-07-15 16:33:04.622612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.622638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.622895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.622922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 
00:34:21.975 [2024-07-15 16:33:04.623133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.623158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.623384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.623410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.623598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.623624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.623845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.623872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.624073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.624114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.624350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.624376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.624569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.624593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.624812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.624839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.625005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.625030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.625277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.625302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 
00:34:21.975 [2024-07-15 16:33:04.625492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.625518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.975 [2024-07-15 16:33:04.625681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.975 [2024-07-15 16:33:04.625706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.975 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.625968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.625995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.626192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.626218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.626434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.626460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.626686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.626711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.626982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.627009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.627202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.627227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.627478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.627504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.627763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.627790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 
00:34:21.976 [2024-07-15 16:33:04.627991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.628017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.628224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.976 [2024-07-15 16:33:04.628250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.976 qpair failed and we were unable to recover it. 00:34:21.976 [2024-07-15 16:33:04.628446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.628471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.628672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.628698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.628948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.628979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.629195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.629220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.629485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.629510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.629755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.629782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.630024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.630050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.630291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.630317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 
00:34:21.977 [2024-07-15 16:33:04.630519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.630544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.630798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.630825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.631064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.631089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.631260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.631286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.631511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.631536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.631721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.977 [2024-07-15 16:33:04.631768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.977 qpair failed and we were unable to recover it. 00:34:21.977 [2024-07-15 16:33:04.631942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.631968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.978 qpair failed and we were unable to recover it. 00:34:21.978 [2024-07-15 16:33:04.632172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.632213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.978 qpair failed and we were unable to recover it. 00:34:21.978 [2024-07-15 16:33:04.632473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.632499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.978 qpair failed and we were unable to recover it. 00:34:21.978 [2024-07-15 16:33:04.632767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.632808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.978 qpair failed and we were unable to recover it. 
00:34:21.978 [2024-07-15 16:33:04.633009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.633035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.978 qpair failed and we were unable to recover it. 00:34:21.978 [2024-07-15 16:33:04.633227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.633253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.978 qpair failed and we were unable to recover it. 00:34:21.978 [2024-07-15 16:33:04.633469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.633494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.978 qpair failed and we were unable to recover it. 00:34:21.978 [2024-07-15 16:33:04.633680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.978 [2024-07-15 16:33:04.633704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.633960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.633987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.634182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.634223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.634431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.634456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.634684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.634710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.634890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.634915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.635150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.635175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 
00:34:21.979 [2024-07-15 16:33:04.635419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.635444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.635614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.635659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.635828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.635856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.636061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.636102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.636312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.636338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.979 qpair failed and we were unable to recover it. 00:34:21.979 [2024-07-15 16:33:04.636591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.979 [2024-07-15 16:33:04.636616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.636847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.636874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.637062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.637103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.637281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.637307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.637482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.637508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 
00:34:21.980 [2024-07-15 16:33:04.637776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.637803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.638034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.638060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.638319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.638344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.638605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.638631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.638851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.638878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.639078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.639104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.639320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.639345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.639552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.639578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.639845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.980 [2024-07-15 16:33:04.639873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.980 qpair failed and we were unable to recover it. 00:34:21.980 [2024-07-15 16:33:04.640056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.640083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 
00:34:21.981 [2024-07-15 16:33:04.640275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.640300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.640511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.640538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.640775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.640801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.641058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.641084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.641251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.641277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.641527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.641553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.641814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.641840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.642070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.642096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.642324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.642350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.642569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.642595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 
00:34:21.981 [2024-07-15 16:33:04.642783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.642809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.642972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.642999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.981 [2024-07-15 16:33:04.643234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.981 [2024-07-15 16:33:04.643259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.981 qpair failed and we were unable to recover it. 00:34:21.982 [2024-07-15 16:33:04.643476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.982 [2024-07-15 16:33:04.643502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.982 qpair failed and we were unable to recover it. 00:34:21.982 [2024-07-15 16:33:04.643714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.982 [2024-07-15 16:33:04.643768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.982 qpair failed and we were unable to recover it. 00:34:21.982 [2024-07-15 16:33:04.644011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.982 [2024-07-15 16:33:04.644051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.982 qpair failed and we were unable to recover it. 00:34:21.982 [2024-07-15 16:33:04.644258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.982 [2024-07-15 16:33:04.644287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.982 qpair failed and we were unable to recover it. 00:34:21.982 [2024-07-15 16:33:04.644537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.982 [2024-07-15 16:33:04.644566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.982 qpair failed and we were unable to recover it. 00:34:21.982 [2024-07-15 16:33:04.644755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.982 [2024-07-15 16:33:04.644793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.982 qpair failed and we were unable to recover it. 00:34:21.982 [2024-07-15 16:33:04.645031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.645057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 
00:34:21.983 [2024-07-15 16:33:04.645276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.645305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.645505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.645535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.645777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.645805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.645942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.645967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.646181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.646210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.646460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.646503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.646671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.646697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.646854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.646881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:21.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 479353 Killed "${NVMF_APP[@]}" "$@" 00:34:21.983 qpair failed and we were unable to recover it. 00:34:21.983 [2024-07-15 16:33:04.647066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.983 [2024-07-15 16:33:04.647122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.984 qpair failed and we were unable to recover it. 
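The errno = 111 that floods this stretch of the log is Linux's ECONNREFUSED: each TCP SYN the host sends to 10.0.0.2:4420 is rejected (typically with an RST) because nothing is listening on the NVMe/TCP port once the nvmf target application (PID 479353 above) has been killed; that kill is the disconnect this target_disconnect test exercises. A minimal standalone C sketch that reproduces the same failure mode (not SPDK code; only the address and port are copied from the log) looks like this:

    /* Minimal reproduction of the repeated error above: a blocking connect()
     * to an address where nothing listens on the port fails with errno 111
     * (ECONNREFUSED). Standalone sketch, not SPDK code. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);          /* NVMe/TCP port used by the test */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            /* With the target down this prints: connect() failed, errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Compiled with cc -o probe probe.c and run while the port has no listener, it prints connect() failed, errno = 111 (Connection refused), matching the posix_sock_create message above. The log resumes below with a new qpair (tqpair=0x7f7de4000b90) retrying against the same address.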
00:34:21.984 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:21.984 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:21.984 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:21.984 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:21.984 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=479992
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 479992
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 479992 ']'
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:21.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:21.986 16:33:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
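waitforlisten is the harness's poll loop: with the rpc_addr=/var/tmp/spdk.sock and max_retries=100 values traced above, it keeps retrying until the freshly started nvmf_tgt (pid 479992) accepts connections on its RPC UNIX socket. A hedged C sketch of that mechanism; the real helper is a bash function in test/common/autotest_common.sh, and the wait_for_listen name and 100 ms back-off here are illustrative assumptions, not SPDK code:

/* Sketch only: poll a UNIX-domain socket until a listener appears,
 * the way a waitforlisten-style helper gates on the RPC socket. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* back off briefly between attempts */
    }
    return -1;                  /* gave up after max_retries tries */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("nvmf_tgt RPC socket is ready");
    else
        puts("timed out waiting for /var/tmp/spdk.sock");
    return 0;
}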
00:34:21.989 [2024-07-15 16:33:04.679841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.989 [2024-07-15 16:33:04.679886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420
00:34:21.989 qpair failed and we were unable to recover it.
00:34:21.989 [2024-07-15 16:33:04.680033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.990 [2024-07-15 16:33:04.680059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.990 qpair failed and we were unable to recover it. 00:34:21.990 [2024-07-15 16:33:04.680169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.990 [2024-07-15 16:33:04.680195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.990 qpair failed and we were unable to recover it. 00:34:21.990 [2024-07-15 16:33:04.680341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.990 [2024-07-15 16:33:04.680366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.990 qpair failed and we were unable to recover it. 00:34:21.990 [2024-07-15 16:33:04.680468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.990 [2024-07-15 16:33:04.680492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.990 qpair failed and we were unable to recover it. 00:34:21.990 [2024-07-15 16:33:04.680685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.990 [2024-07-15 16:33:04.680714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.990 qpair failed and we were unable to recover it. 00:34:21.990 [2024-07-15 16:33:04.680866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.991 [2024-07-15 16:33:04.680899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.991 qpair failed and we were unable to recover it. 00:34:21.991 [2024-07-15 16:33:04.681023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.991 [2024-07-15 16:33:04.681064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.991 qpair failed and we were unable to recover it. 00:34:21.991 [2024-07-15 16:33:04.681173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.991 [2024-07-15 16:33:04.681197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.991 qpair failed and we were unable to recover it. 00:34:21.991 [2024-07-15 16:33:04.681324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.991 [2024-07-15 16:33:04.681348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.991 qpair failed and we were unable to recover it. 00:34:21.991 [2024-07-15 16:33:04.681473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.991 [2024-07-15 16:33:04.681501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.991 qpair failed and we were unable to recover it. 
00:34:21.991 [2024-07-15 16:33:04.681633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.991 [2024-07-15 16:33:04.681672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.991 qpair failed and we were unable to recover it. 00:34:21.991 [2024-07-15 16:33:04.681797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.681824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.681933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.681959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.682096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.682125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.682252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.682278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.682483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.682511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.682655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.682684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.682791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.682820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.682952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.682977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.683130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.683155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 
00:34:21.992 [2024-07-15 16:33:04.683279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.683321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.992 [2024-07-15 16:33:04.683420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.992 [2024-07-15 16:33:04.683446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.992 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.683594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.683619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.683749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.683776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.683907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.683936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.684042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.684071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.684172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.684197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.684329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.684353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.684498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.684540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.684684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.684712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 
00:34:21.993 [2024-07-15 16:33:04.684892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.684918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.685034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.993 [2024-07-15 16:33:04.685059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.993 qpair failed and we were unable to recover it. 00:34:21.993 [2024-07-15 16:33:04.685208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.994 [2024-07-15 16:33:04.685236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.994 qpair failed and we were unable to recover it. 00:34:21.994 [2024-07-15 16:33:04.685372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.994 [2024-07-15 16:33:04.685401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.994 qpair failed and we were unable to recover it. 00:34:21.994 [2024-07-15 16:33:04.685572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.994 [2024-07-15 16:33:04.685600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.994 qpair failed and we were unable to recover it. 00:34:21.994 [2024-07-15 16:33:04.685732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.994 [2024-07-15 16:33:04.685784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.994 qpair failed and we were unable to recover it. 00:34:21.994 [2024-07-15 16:33:04.685901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.994 [2024-07-15 16:33:04.685927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.994 qpair failed and we were unable to recover it. 00:34:21.994 [2024-07-15 16:33:04.686063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.994 [2024-07-15 16:33:04.686091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.994 qpair failed and we were unable to recover it. 00:34:21.994 [2024-07-15 16:33:04.686216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.995 [2024-07-15 16:33:04.686256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.995 qpair failed and we were unable to recover it. 00:34:21.995 [2024-07-15 16:33:04.686377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.995 [2024-07-15 16:33:04.686402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.995 qpair failed and we were unable to recover it. 
00:34:21.995 [2024-07-15 16:33:04.686552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.995 [2024-07-15 16:33:04.686581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.995 qpair failed and we were unable to recover it. 00:34:21.995 [2024-07-15 16:33:04.686709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.995 [2024-07-15 16:33:04.686744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.995 qpair failed and we were unable to recover it. 00:34:21.995 [2024-07-15 16:33:04.686884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.995 [2024-07-15 16:33:04.686911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.995 qpair failed and we were unable to recover it. 00:34:21.995 [2024-07-15 16:33:04.687033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.995 [2024-07-15 16:33:04.687073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.995 qpair failed and we were unable to recover it. 00:34:21.995 [2024-07-15 16:33:04.687241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.995 [2024-07-15 16:33:04.687269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.995 qpair failed and we were unable to recover it. 00:34:21.995 [2024-07-15 16:33:04.687375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.687407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 00:34:21.996 [2024-07-15 16:33:04.687522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.687547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 00:34:21.996 [2024-07-15 16:33:04.687698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.687749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 00:34:21.996 [2024-07-15 16:33:04.687911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.687940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 00:34:21.996 [2024-07-15 16:33:04.688055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.688084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 
00:34:21.996 [2024-07-15 16:33:04.688222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.688246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 00:34:21.996 [2024-07-15 16:33:04.688342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.688366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 00:34:21.996 [2024-07-15 16:33:04.688509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.996 [2024-07-15 16:33:04.688537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.996 qpair failed and we were unable to recover it. 00:34:21.996 [2024-07-15 16:33:04.688672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.688700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.688851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.688892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.688999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.689042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.689174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.689202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.689337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.689365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.689503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.689528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.689629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.689653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 
00:34:21.997 [2024-07-15 16:33:04.689802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.689831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.689969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.997 [2024-07-15 16:33:04.689997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.997 qpair failed and we were unable to recover it. 00:34:21.997 [2024-07-15 16:33:04.690139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.690182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.690310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.690351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.690464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.690493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.690627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.690655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.690816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.690842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.690990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.691030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.691144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.691172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.691309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.691338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 
00:34:21.998 [2024-07-15 16:33:04.691499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.998 [2024-07-15 16:33:04.691522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.998 qpair failed and we were unable to recover it. 00:34:21.998 [2024-07-15 16:33:04.691633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.999 [2024-07-15 16:33:04.691657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.999 qpair failed and we were unable to recover it. 00:34:21.999 [2024-07-15 16:33:04.691800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.999 [2024-07-15 16:33:04.691826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.999 qpair failed and we were unable to recover it. 00:34:21.999 [2024-07-15 16:33:04.691948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.999 [2024-07-15 16:33:04.691973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.999 qpair failed and we were unable to recover it. 00:34:21.999 [2024-07-15 16:33:04.692103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.999 [2024-07-15 16:33:04.692127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.999 qpair failed and we were unable to recover it. 00:34:21.999 [2024-07-15 16:33:04.692230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.999 [2024-07-15 16:33:04.692254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.999 qpair failed and we were unable to recover it. 00:34:21.999 [2024-07-15 16:33:04.692413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.999 [2024-07-15 16:33:04.692441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:21.999 qpair failed and we were unable to recover it. 00:34:22.000 [2024-07-15 16:33:04.692584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.000 [2024-07-15 16:33:04.692612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.000 qpair failed and we were unable to recover it. 00:34:22.000 [2024-07-15 16:33:04.692780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.000 [2024-07-15 16:33:04.692807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.000 qpair failed and we were unable to recover it. 00:34:22.000 [2024-07-15 16:33:04.692936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.000 [2024-07-15 16:33:04.692978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.000 qpair failed and we were unable to recover it. 
00:34:22.000 [2024-07-15 16:33:04.693119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.000 [2024-07-15 16:33:04.693148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.000 qpair failed and we were unable to recover it. 00:34:22.000 [2024-07-15 16:33:04.693265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.000 [2024-07-15 16:33:04.693294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.000 qpair failed and we were unable to recover it. 00:34:22.000 [2024-07-15 16:33:04.693446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.000 [2024-07-15 16:33:04.693470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.000 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.693622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.693664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.693804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.693833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.693964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.693992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.694151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.694175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.694269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.694291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.694421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.694460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.694595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.694623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 
00:34:22.001 [2024-07-15 16:33:04.694750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.001 [2024-07-15 16:33:04.694791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.001 qpair failed and we were unable to recover it. 00:34:22.001 [2024-07-15 16:33:04.694899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.694924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.695076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.695105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.695208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.695236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.695370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.695394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.695514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.695538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.695639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.695666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.695833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.695862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.695975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.002 [2024-07-15 16:33:04.696001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.002 qpair failed and we were unable to recover it. 00:34:22.002 [2024-07-15 16:33:04.696140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.696165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 
00:34:22.003 [2024-07-15 16:33:04.696298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.696327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.003 [2024-07-15 16:33:04.696465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.696493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.003 [2024-07-15 16:33:04.696604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.696628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.003 [2024-07-15 16:33:04.696764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.696805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.003 [2024-07-15 16:33:04.696952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.696980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.003 [2024-07-15 16:33:04.697110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.697138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.003 [2024-07-15 16:33:04.697302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.697326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.003 [2024-07-15 16:33:04.697450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.003 [2024-07-15 16:33:04.697490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.003 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.697600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.697628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.697772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.697799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 
00:34:22.004 [2024-07-15 16:33:04.697928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.697953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.698123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.698147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.698289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.698322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.698429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.698457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.698595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.698620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.698749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.698775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.698912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.698941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.004 [2024-07-15 16:33:04.699085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.004 [2024-07-15 16:33:04.699113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.004 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.699270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.699294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.699413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.699438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 
00:34:22.005 [2024-07-15 16:33:04.699564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.699593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.699704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.699732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.699883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.699909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.700011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.700052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.700163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.700191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.700353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.700382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.005 qpair failed and we were unable to recover it. 00:34:22.005 [2024-07-15 16:33:04.700542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.005 [2024-07-15 16:33:04.700566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-15 16:33:04.700723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-07-15 16:33:04.700779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-15 16:33:04.700883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-07-15 16:33:04.700911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 00:34:22.006 [2024-07-15 16:33:04.701068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.006 [2024-07-15 16:33:04.701097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.006 qpair failed and we were unable to recover it. 
00:34:22.006 [2024-07-15 16:33:04.701218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.006 [2024-07-15 16:33:04.701244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420
00:34:22.006 qpair failed and we were unable to recover it.
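The three-line pattern above repeats for every connect attempt in this run. For reference, errno 111 on Linux is ECONNREFUSED: the TCP connection to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) is being actively refused, meaning nothing is listening on that port at that moment. A minimal standalone sketch that reproduces the same errno outside SPDK; the address and port come from the log, everything else is illustrative:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same endpoint the failing qpairs were dialing in the log above. */
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the port this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}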
00:34:22.008 [2024-07-15 16:33:04.706032] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:22.008 [2024-07-15 16:33:04.706120] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
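The bracketed line above records the argv that SPDK's env layer hands down to DPDK's EAL for the nvmf process. A hedged sketch of the equivalent direct EAL initialization, with argv copied verbatim from that line (the test itself starts through SPDK's app framework rather than calling DPDK this way):

#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    /* argv mirrors the "[ DPDK EAL parameters: ... ]" line in the log. */
    char *eal_argv[] = {
        "nvmf",
        "-c", "0xF0",
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}

Here -c 0xF0 is a coremask selecting cores 4-7, --proc-type=auto lets EAL decide between primary and secondary process roles, and --file-prefix=spdk0 keeps this instance's hugepage and shared-memory files in their own namespace.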
00:34:22.008 [2024-07-15 16:33:04.711090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1111b10 is same with the state(5) to be set
00:34:22.008 [2024-07-15 16:33:04.711257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.008 [2024-07-15 16:33:04.711292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420
00:34:22.008 qpair failed and we were unable to recover it.
00:34:22.010 [2024-07-15 16:33:04.728271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.010 [2024-07-15 16:33:04.728312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.010 qpair failed and we were unable to recover it.
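With several distinct tqpair addresses failing the same way (0x7f7de4000b90 and 0x1103fa0 above, 0x7f7ddc000b90 here), a small triage helper makes runs like this easier to read. A hypothetical sketch, qpair_tally.c is not part of the SPDK tree, that tallies failed connect attempts per tqpair from a saved console log:

#include <stdio.h>
#include <string.h>

#define MAX_QPAIRS 64

int main(void)
{
    char line[4096];
    char keys[MAX_QPAIRS][32];
    unsigned long counts[MAX_QPAIRS] = {0};
    int nkeys = 0;

    while (fgets(line, sizeof(line), stdin)) {
        /* Match the nvme_tcp_qpair_connect_sock error lines seen above. */
        const char *p = strstr(line, "sock connection error of tqpair=");
        if (p == NULL)
            continue;
        char key[32];
        if (sscanf(p, "sock connection error of tqpair=%31[x0-9a-f]", key) != 1)
            continue;
        int i;
        for (i = 0; i < nkeys; i++) {
            if (strcmp(keys[i], key) == 0)
                break;
        }
        if (i == nkeys) {
            if (nkeys == MAX_QPAIRS)
                continue; /* table full; ignore further addresses */
            strcpy(keys[nkeys++], key);
        }
        counts[i]++;
    }
    for (int i = 0; i < nkeys; i++)
        printf("tqpair=%s: %lu failed connect attempts\n", keys[i], counts[i]);
    return 0;
}

Typical use would be something like: cc -o qpair_tally qpair_tally.c && ./qpair_tally < console.log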
00:34:22.011 [2024-07-15 16:33:04.734173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.734198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.734365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.734389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.734510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.734537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.734682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.734721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.734850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.734878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.735008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.735036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.735154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.735180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.735327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.735351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.735452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.735477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.735629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.735653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 
00:34:22.011 [2024-07-15 16:33:04.735785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.735813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.735939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.735965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.736083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.736123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.736327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.736352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.736478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.736503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.736633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.011 [2024-07-15 16:33:04.736658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.011 qpair failed and we were unable to recover it. 00:34:22.011 [2024-07-15 16:33:04.736780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.736807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.736936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.736962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.737184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.737210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.737352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.737377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 
00:34:22.012 [2024-07-15 16:33:04.737532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.737557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.737663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.737689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.737836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.737863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.737977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.738004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.738152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.738179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.738302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.738326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.738473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.738499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.738624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.738649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.738809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.738840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.738974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.739000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 
00:34:22.012 [2024-07-15 16:33:04.739122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.739162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.739301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.739326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.739468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.739494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.739656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.739709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.739880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.739908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.740010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.740035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.740155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.740180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.740310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.740334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.740477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.740502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.740646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.740686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 
00:34:22.012 [2024-07-15 16:33:04.740807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.740832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.740986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.741011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.741150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.741174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.741305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.741330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.741451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.741476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.741617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.741642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.741772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.741799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.741929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.741954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.742074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.742129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.742275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.742319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 
00:34:22.012 [2024-07-15 16:33:04.742491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.742534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.742660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.742700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.742837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.742864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.742971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.742997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.743140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.743169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.743300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.743349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.743501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.743527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.743665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.743690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.743866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.743892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.744000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.744045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 
00:34:22.012 [2024-07-15 16:33:04.744180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.744204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.744392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.744416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.744572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.744596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.744743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.744770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.744897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.744924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.745047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.745072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.745212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.745236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.745387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.745412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.745513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.745537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 00:34:22.012 [2024-07-15 16:33:04.745664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.012 [2024-07-15 16:33:04.745688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.012 qpair failed and we were unable to recover it. 
00:34:22.012 [2024-07-15 16:33:04.745813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.745840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.745945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.745971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.746081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.746121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.746264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.746300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.746474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.746499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.746650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.746675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.746822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.746849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.746982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.747008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.747164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.747188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.013 [2024-07-15 16:33:04.747328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.747367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 
00:34:22.013 [2024-07-15 16:33:04.747568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.747606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.747751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.747778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.748062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.748102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.748308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.748363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.748473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.748498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.748651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.748677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.748809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.748835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.748992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.749018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.749155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.749195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.749339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.749378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 
00:34:22.013 [2024-07-15 16:33:04.749490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.749516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.749638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.749664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.749834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.749860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.749963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.749990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.750146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.750185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.750343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.750367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.750546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.750572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.750727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.750762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.750856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.750882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.750984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.751010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 
00:34:22.013 [2024-07-15 16:33:04.751178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.751218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.751372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.751397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.751571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.751596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.751793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.751819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.751925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.751951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.752090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.752129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.752279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.752303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.752446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.752472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.752732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.752766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.752879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.752905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 
00:34:22.013 [2024-07-15 16:33:04.753053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.753078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.753192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.753217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.753382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.753407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.753582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.753633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.753756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.753782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.753887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.753913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.754010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.754036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.754163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.754187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.754420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.754445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.754588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.754626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 
00:34:22.013 [2024-07-15 16:33:04.754845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.754873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.755007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.755032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.755181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.755220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.755409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.755433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.755592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.755616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.755721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.755769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.755883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.013 [2024-07-15 16:33:04.755909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.013 qpair failed and we were unable to recover it. 00:34:22.013 [2024-07-15 16:33:04.756056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.014 [2024-07-15 16:33:04.756096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.014 qpair failed and we were unable to recover it. 00:34:22.014 [2024-07-15 16:33:04.756233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.014 [2024-07-15 16:33:04.756258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.014 qpair failed and we were unable to recover it. 00:34:22.014 [2024-07-15 16:33:04.756417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.014 [2024-07-15 16:33:04.756443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.014 qpair failed and we were unable to recover it. 
00:34:22.014 [2024-07-15 16:33:04.756625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.014 [2024-07-15 16:33:04.756650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.014 qpair failed and we were unable to recover it.
00:34:22.014 [16:33:04.756625 .. 16:33:04.784331] the same three-entry failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt against addr=10.0.0.2, port=4420, cycling over tqpair=0x1103fa0, 0x7f7ddc000b90 and 0x7f7de4000b90; duplicate entries collapsed.
00:34:22.017 [2024-07-15 16:33:04.784387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:22.017 [16:33:04.784482 .. 16:33:04.793205] the same connect() errno = 111 / nvme_tcp_qpair_connect_sock failure continues uninterrupted, now on tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."; duplicate entries collapsed.
00:34:22.018 [2024-07-15 16:33:04.793364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.018 [2024-07-15 16:33:04.793389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.793592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.793616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.793762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.793788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.793953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.793989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.794172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.794196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.794325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.794363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.794491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.794516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.794658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.794683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.794875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.794902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.795098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.795138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-07-15 16:33:04.795318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.795346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.795528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.795553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.795728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.795776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.795947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.795973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.796103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.796129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.796269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.796313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.796492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.796517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.796674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.796699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.796895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.796935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.797072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.797112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-07-15 16:33:04.797328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.797353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.797565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.797589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.797782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.797809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.797975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.798001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.798101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.798128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.798300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.798347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.798470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.798497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.798648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.798680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.798865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.798892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.799002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.799029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-07-15 16:33:04.799161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.799187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.799348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.799374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.799516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.799542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.799693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.799719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.799869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.799895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.800009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.800035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.800200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.800226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.800426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.800450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.800647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.800672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.800873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.800900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 
00:34:22.019 [2024-07-15 16:33:04.801015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.801055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.019 qpair failed and we were unable to recover it. 00:34:22.019 [2024-07-15 16:33:04.801235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.019 [2024-07-15 16:33:04.801260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.801422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.801447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.801603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.801629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.801804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.801830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.802046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.802072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.802215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.802239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.802384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.802409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.802554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.802595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.802827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.802854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-07-15 16:33:04.803014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.803053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.803194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.803220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.803413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.803439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.803560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.803585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.803804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.803838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.803940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.803966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.804083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.804108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.804244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.804296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.804455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.804478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.804670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.804694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-07-15 16:33:04.804827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.804853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.804982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.805008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.805201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.805226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.805424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.805449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.805594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.805619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.805854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.805885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.806045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.806070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.806244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.806280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.806421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.806447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.806600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.806638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-07-15 16:33:04.806834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.806861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.807003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.807029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.807236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.807261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.807427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.807467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.807645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.807669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.807786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.807812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.807950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.807976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.808103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.808143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.808274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.808300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.808458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.808484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-07-15 16:33:04.808610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.808636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.808835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.808876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.809018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.809047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.809211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.809236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.809338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.809363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.809565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.809591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.809720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.809753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.809902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.809930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.810086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.810112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.810352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.810376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 
00:34:22.020 [2024-07-15 16:33:04.810539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.810563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.810751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.810778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.020 qpair failed and we were unable to recover it. 00:34:22.020 [2024-07-15 16:33:04.810972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.020 [2024-07-15 16:33:04.811001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.811180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.811205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.811319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.811359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.811523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.811562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.811685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.811710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.811878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.811904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.812057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.812098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.812255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.812280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-07-15 16:33:04.812439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.812464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.812661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.812687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.812853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.812893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.813056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.813083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.813230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.813256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.813451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.813475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.813629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.813654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.813795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.813821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.814047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.814071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.814219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.814244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-07-15 16:33:04.814391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.814430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.814649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.814685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.814833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.814860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.815015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.815041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.815247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.815271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.815410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.815436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.815589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.815630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.815776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.815817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.815959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.815986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.816137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.816163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-07-15 16:33:04.816292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.816332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.816498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.816523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.816648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.816674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.816896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.816931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.817079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.817104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.817305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.817338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.817513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.817538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.817657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.817682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.817806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.817833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 00:34:22.021 [2024-07-15 16:33:04.818020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.021 [2024-07-15 16:33:04.818060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.021 qpair failed and we were unable to recover it. 
00:34:22.021 [2024-07-15 16:33:04.818166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.021 [2024-07-15 16:33:04.818191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420
00:34:22.021 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 16:33:04.818 to 16:33:04.857, differing only in timestamps ...]
00:34:22.024 [2024-07-15 16:33:04.857071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.024 [2024-07-15 16:33:04.857096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420
00:34:22.024 qpair failed and we were unable to recover it.
00:34:22.024 [2024-07-15 16:33:04.857206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.024 [2024-07-15 16:33:04.857246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.024 qpair failed and we were unable to recover it. 00:34:22.024 [2024-07-15 16:33:04.857356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.857381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.857493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.857518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.857689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.857729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.857883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.857909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.858011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.858037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.858187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.858226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.858357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.858397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.858539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.858565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.858711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.858743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 
00:34:22.025 [2024-07-15 16:33:04.858872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.858898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.859036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.859061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.859217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.859241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.859382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.859407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.859533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.859558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.859684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.859709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.859832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.859858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.859962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.859988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.860127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.860152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.860303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.860347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 
00:34:22.025 [2024-07-15 16:33:04.860486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.860511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.860644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.860669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.860797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.860824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.860941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.860967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.861114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.861154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.861287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.861312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.861457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.861482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.861619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.861659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.861794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.861820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.861979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.862005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 
00:34:22.025 [2024-07-15 16:33:04.862145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.862170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.862292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.862317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.862456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.862482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.862609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.862634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.862781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.862823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.862950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.862975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.863103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.863128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.863251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.863276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.863407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.863432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.863575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.863601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 
00:34:22.025 [2024-07-15 16:33:04.863762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.863788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.863914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.863940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.864062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.864102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.864248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.864288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.864439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.864478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.864589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.864628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.864801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.864828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.864982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.865007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.865174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.865199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.865306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.865331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 
00:34:22.025 [2024-07-15 16:33:04.865473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.865498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.865635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.865661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.865812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.865838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.865997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.866023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.866162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.866186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.866346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.866387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.866544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.866569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.866734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.866782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.866885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.866911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.867048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.867076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 
00:34:22.025 [2024-07-15 16:33:04.867208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.867247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.867388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.025 [2024-07-15 16:33:04.867413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.025 qpair failed and we were unable to recover it. 00:34:22.025 [2024-07-15 16:33:04.867555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.867580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.867715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.867746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.867869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.867895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.868053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.868078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.868239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.868263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.868386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.868411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.868563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.868587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.868749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.868775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 
00:34:22.026 [2024-07-15 16:33:04.868898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.868924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.869079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.869104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.869229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.869269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.869417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.869442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.869575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.869600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.869778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.869804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.869906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.869932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.870063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.870104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.870206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.870246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.870382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.870407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 
00:34:22.026 [2024-07-15 16:33:04.870534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.870559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.870710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.870735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.870895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.870921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.871060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.871100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.871241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.871265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.871408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.871434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.871593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.871633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.871759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.871786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.871892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.871918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.872041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.872067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 
00:34:22.026 [2024-07-15 16:33:04.872239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.872263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.872391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.872416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.872569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.872595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.872729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.872774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.872887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.872913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.873018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.873058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.873203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.873243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.873345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.873370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.873501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.873526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.873659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.873688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 
00:34:22.026 [2024-07-15 16:33:04.873850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.873877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.873974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.873999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.874169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.874210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.874339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.874364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.874505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.874530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.874640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.874666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.874776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.874803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.874975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.875015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.875141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.875165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.875310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.875335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 
00:34:22.026 [2024-07-15 16:33:04.875465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.875490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.875669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.875693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.875816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.875842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.875949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.875975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.876113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.876138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.876296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.876335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.876473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.876498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.876613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.876638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.876801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.876828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.876963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.876988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 
00:34:22.026 [2024-07-15 16:33:04.877100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.877125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.877270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.877295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.877424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.877450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.026 [2024-07-15 16:33:04.877566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.026 [2024-07-15 16:33:04.877591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.026 qpair failed and we were unable to recover it. 00:34:22.027 [2024-07-15 16:33:04.877750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.027 [2024-07-15 16:33:04.877792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.027 qpair failed and we were unable to recover it. 00:34:22.027 [2024-07-15 16:33:04.877900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.027 [2024-07-15 16:33:04.877926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.027 qpair failed and we were unable to recover it. 00:34:22.027 [2024-07-15 16:33:04.878076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.027 [2024-07-15 16:33:04.878101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.027 qpair failed and we were unable to recover it. 00:34:22.027 [2024-07-15 16:33:04.878231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.027 [2024-07-15 16:33:04.878256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.027 qpair failed and we were unable to recover it. 00:34:22.027 [2024-07-15 16:33:04.878378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.027 [2024-07-15 16:33:04.878403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.027 qpair failed and we were unable to recover it. 00:34:22.027 [2024-07-15 16:33:04.878577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.027 [2024-07-15 16:33:04.878621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.027 qpair failed and we were unable to recover it. 
00:34:22.027 [2024-07-15 16:33:04.878752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.027 [2024-07-15 16:33:04.878779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420
00:34:22.027 qpair failed and we were unable to recover it.
[... the three-line pattern above repeats roughly 80 times between 16:33:04.878934 and 16:33:04.892068, every attempt targeting addr=10.0.0.2, port=4420; only the microsecond timestamps vary, except for three entries near the end whose tqpair values are 0x1103fa0, 0x7f7ddc000b90, and 0x7f7dd4000b90 instead of 0x7f7de4000b90 ...]
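For anyone triaging this run: errno = 111 is ECONNREFUSED on Linux, so every entry above records the initiator's TCP connect() to the NVMe-oF target listener at 10.0.0.2:4420 being actively refused, which usually means nothing was listening on that address/port at that moment. The following standalone C sketch (illustrative only; it is not part of SPDK or of this test, and the file name is made up) reproduces the same errno against the same address:

    /* connect_probe.c -- minimal standalone sketch: attempt one TCP connect
     * to the address/port from the log and report the resulting errno. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected\n");
        }
        close(fd);
        return 0;
    }

If such a probe connects instead of failing, the listener is up and the failures above would point elsewhere, for example at a startup race in the test setup.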
00:34:22.028 [2024-07-15 16:33:04.892092] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:22.028 [2024-07-15 16:33:04.892127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:22.028 [2024-07-15 16:33:04.892142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:22.028 [2024-07-15 16:33:04.892154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:22.028 [2024-07-15 16:33:04.892164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:22.028 [2024-07-15 16:33:04.892315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:22.028 [2024-07-15 16:33:04.892383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:22.028 [2024-07-15 16:33:04.892467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:22.028 [2024-07-15 16:33:04.892469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
[... interleaved with the notices above, seven more connect() failed / sock connection error / qpair failed triples for tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420, timestamps 16:33:04.892171 through 16:33:04.893222 ...]
00:34:22.028 [2024-07-15 16:33:04.893331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.028 [2024-07-15 16:33:04.893357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420
00:34:22.028 qpair failed and we were unable to recover it.
[... the same three-line pattern repeats roughly 120 more times between 16:33:04.893510 and 16:33:04.912423, cycling through tqpair values 0x7f7de4000b90, 0x7f7ddc000b90, 0x7f7dd4000b90, and 0x1103fa0; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:34:22.030 [2024-07-15 16:33:04.912555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.912581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.912724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.912758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.912867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.912894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.913028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.913054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.913187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.913212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.913343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.913369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.913523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.913549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.913703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.913757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.913928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.913956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.914117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.914144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 
00:34:22.030 [2024-07-15 16:33:04.914278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.914304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.914433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.914459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.914560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.914586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.030 [2024-07-15 16:33:04.914695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.030 [2024-07-15 16:33:04.914722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.030 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.914868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.914895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.915051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.915077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.915200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.915226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.915354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.915380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.915489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.915515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.915623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.915651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 
00:34:22.031 [2024-07-15 16:33:04.915811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.915838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.915948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.915974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.916090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.916116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.916247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.916273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.916416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.916443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.916548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.916575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.916714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.916755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.916864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.916890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.917046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.917071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.917196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.917222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 
00:34:22.031 [2024-07-15 16:33:04.917351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.917377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.917506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.917532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.917666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.917692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.917801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.917828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.917934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.917959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.918089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.918115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.918247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.918273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.918377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.918402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.918504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.918530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.918659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.918685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 
00:34:22.031 [2024-07-15 16:33:04.918845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.918886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.919036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.919064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.919170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.919197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.919304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.919330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.919448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.919474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.919608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.919635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.919733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.919766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.919873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.919898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.920032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.920058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.920189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.920215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 
00:34:22.031 [2024-07-15 16:33:04.920370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.920395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.920496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.920522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.920677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.920703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.920813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.920840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.920953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.031 [2024-07-15 16:33:04.920980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.031 qpair failed and we were unable to recover it. 00:34:22.031 [2024-07-15 16:33:04.921143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.921169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.921303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.921329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.921483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.921509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.921645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.921672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.921825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.921851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 
00:34:22.032 [2024-07-15 16:33:04.921979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.922005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.922129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.922154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.922251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.922277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.922387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.922413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.922544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.922569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.032 [2024-07-15 16:33:04.922702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.032 [2024-07-15 16:33:04.922728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.032 qpair failed and we were unable to recover it. 00:34:22.301 [2024-07-15 16:33:04.922886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.301 [2024-07-15 16:33:04.922912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.301 qpair failed and we were unable to recover it. 00:34:22.301 [2024-07-15 16:33:04.923031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.301 [2024-07-15 16:33:04.923072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.301 qpair failed and we were unable to recover it. 00:34:22.301 [2024-07-15 16:33:04.923209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.301 [2024-07-15 16:33:04.923237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.301 qpair failed and we were unable to recover it. 00:34:22.301 [2024-07-15 16:33:04.923334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.301 [2024-07-15 16:33:04.923360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.301 qpair failed and we were unable to recover it. 
00:34:22.301 [2024-07-15 16:33:04.923492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.923519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.923647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.923673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.923784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.923811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.923907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.923934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.924027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.924053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.924151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.924177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.924300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.924325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.924432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.924457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.924589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.924614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.924713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.924743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 
00:34:22.302 [2024-07-15 16:33:04.924879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.924905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.925009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.925035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.925159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.925184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.925287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.925312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.925442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.925468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.925570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.925596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.925724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.925769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.925875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.925901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.926030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.926055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.926210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.926235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 
00:34:22.302 [2024-07-15 16:33:04.926379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.926404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.926540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.926566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.926702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.926727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.926874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.926900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.927014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.927055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.927221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.927248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.927372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.927399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.927502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.927528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.927661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.302 [2024-07-15 16:33:04.927687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.302 qpair failed and we were unable to recover it. 00:34:22.302 [2024-07-15 16:33:04.927815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.927842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 
00:34:22.303 [2024-07-15 16:33:04.928005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.928032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.928191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.928216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.928316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.928342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.928439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.928464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.928647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.928689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.928867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.928896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.929030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.929057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.929195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.929221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.929390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.929417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.929550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.929577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 
00:34:22.303 [2024-07-15 16:33:04.929704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.929752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.929888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.929915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.930044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.930070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.930205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.930231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.930365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.930391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.930550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.930576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1103fa0 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.930714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.930767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.930908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.930936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.931116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.931142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.931249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.931276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 
00:34:22.303 [2024-07-15 16:33:04.931394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.931420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.931594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.931621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.931722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.931757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.931890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.931916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.932046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.932073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.932258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.932285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.932484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.932511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.932649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.932675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.932790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.932817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 00:34:22.303 [2024-07-15 16:33:04.932919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.303 [2024-07-15 16:33:04.932945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.303 qpair failed and we were unable to recover it. 
00:34:22.304 [2024-07-15 16:33:04.933138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.304 [2024-07-15 16:33:04.933169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.304 qpair failed and we were unable to recover it.
00:34:22.305 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 16:33:04.933 and 16:33:04.969, roughly 210 occurrences in total; intermediate repetitions elided ...]
00:34:22.310 [2024-07-15 16:33:04.969777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.310 [2024-07-15 16:33:04.969803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.310 qpair failed and we were unable to recover it.
00:34:22.310 [2024-07-15 16:33:04.969939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.310 [2024-07-15 16:33:04.969966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.310 qpair failed and we were unable to recover it. 00:34:22.310 [2024-07-15 16:33:04.970094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.310 [2024-07-15 16:33:04.970120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.970254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.970280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.970439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.970465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.970611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.970638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.970766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.970793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.970940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.970966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.971088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.971115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.971318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.971344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.971519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.971545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 
00:34:22.311 [2024-07-15 16:33:04.971667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.971693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.971798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.971824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.971989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.972015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.972145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.972172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.972308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.972334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.972450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.972476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.972614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.972650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.972797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.972824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.973021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.973047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.973206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.973232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 
00:34:22.311 [2024-07-15 16:33:04.973339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.973365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.973497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.973523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.973656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.973682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.973843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.973870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.973999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.974025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.974185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.974211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.974394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.974430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.974530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.974556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.974688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.974714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.974888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.974934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 
00:34:22.311 [2024-07-15 16:33:04.975070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.975098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.975293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.975319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.975510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.311 [2024-07-15 16:33:04.975536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.311 qpair failed and we were unable to recover it. 00:34:22.311 [2024-07-15 16:33:04.975638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.975665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.975781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.975814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.975975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.976001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.976144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.976170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.976308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.976334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.976483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.976509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.976641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.976667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 
00:34:22.312 [2024-07-15 16:33:04.976796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.976823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.976986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.977014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.977124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.977150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.977275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.977301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.977401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.977427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.977570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.977595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.977793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.977820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.977981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.978008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.978186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.978212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.978376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.978402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 
00:34:22.312 [2024-07-15 16:33:04.978601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.978627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.978727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.978758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.978889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.978915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.979036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.979063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.979221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.979246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.979398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.979424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.979552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.979578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.979747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.979774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.979979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.980005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.980128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.980154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 
00:34:22.312 [2024-07-15 16:33:04.980294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.980320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.980492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.980519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.980726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.980771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.312 [2024-07-15 16:33:04.980909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.312 [2024-07-15 16:33:04.980935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.312 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.981080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.981105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.981257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.981282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.981464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.981490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.981661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.981687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.981882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.981923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.982079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.982112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 
00:34:22.313 [2024-07-15 16:33:04.982274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.982300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.982429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.982455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.982617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.982644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.982767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.982795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.982925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.982958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.983089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.983115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.983242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.983268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.983370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.983396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.983539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.983565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.983692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.983718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 
00:34:22.313 [2024-07-15 16:33:04.983860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.983887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.983996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.984022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.984141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.984167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.984289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.984315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.984433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.984459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.984620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.984647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.984769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.984809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.984951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.984978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.985147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.985184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.985322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.985347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 
00:34:22.313 [2024-07-15 16:33:04.985449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.985475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.313 [2024-07-15 16:33:04.985607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.313 [2024-07-15 16:33:04.985632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.313 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.985803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.985830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.985990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.986016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.986148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.986175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.986367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.986405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.986547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.986574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.986762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.986789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.986893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.986919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.987054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.987080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 
00:34:22.314 [2024-07-15 16:33:04.987213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.987239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.987404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.987431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.987558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.987584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.987752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.987778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.987896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.987922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.988064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.988090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.988200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.988226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.988364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.988390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.988544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.988570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.988675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.988701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 
00:34:22.314 [2024-07-15 16:33:04.988824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.988864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.989007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.989035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.989160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.989187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.989380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.989406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.989559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.989590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.989727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.989761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.989875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.989901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.990004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.990030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.990168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.990205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.990355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.990381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 
00:34:22.314 [2024-07-15 16:33:04.990515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.990540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.990672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.990698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.990860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.314 [2024-07-15 16:33:04.990900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.314 qpair failed and we were unable to recover it. 00:34:22.314 [2024-07-15 16:33:04.991010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.315 [2024-07-15 16:33:04.991038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.315 qpair failed and we were unable to recover it. 00:34:22.315 [2024-07-15 16:33:04.991250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.315 [2024-07-15 16:33:04.991277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.315 qpair failed and we were unable to recover it. 00:34:22.315 [2024-07-15 16:33:04.991409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.315 [2024-07-15 16:33:04.991435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.315 qpair failed and we were unable to recover it. 00:34:22.315 [2024-07-15 16:33:04.991596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.315 [2024-07-15 16:33:04.991623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.315 qpair failed and we were unable to recover it. 00:34:22.315 [2024-07-15 16:33:04.991755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.315 [2024-07-15 16:33:04.991782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.315 qpair failed and we were unable to recover it. 00:34:22.315 [2024-07-15 16:33:04.991925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.315 [2024-07-15 16:33:04.991952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.315 qpair failed and we were unable to recover it. 00:34:22.315 [2024-07-15 16:33:04.992149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.315 [2024-07-15 16:33:04.992186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.315 qpair failed and we were unable to recover it. 
00:34:22.315 [2024-07-15 16:33:04.992373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.315 [2024-07-15 16:33:04.992400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.315 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back, timestamps 2024-07-15 16:33:04.992535 through 16:33:05.025261 ...]
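For triage purposes: errno 111 on Linux is ECONNREFUSED, so every triplet above is the initiator's plain TCP connect() being actively refused at 10.0.0.2:4420 (the NVMe/TCP well-known port) before any NVMe/TCP handshake can begin; in a target-disconnect test that is the expected symptom while the listener is down. The following minimal standalone sketch, which is not SPDK code, produces the same errno against any reachable host with no listener on the port:

/*
 * Minimal standalone sketch (not SPDK code): reproduce the "connect()
 * failed, errno = 111" record above. On Linux, 111 is ECONNREFUSED.
 * The address and port mirror the log (10.0.0.2:4420); point it at any
 * reachable host with nothing listening on the port.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);          /* NVMe/TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, a reachable peer answers the
         * SYN with RST and connect() fails with ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with any C compiler (e.g. cc demo.c && ./a.out, filename hypothetical), it prints "connect() failed, errno = 111 (Connection refused)" when the peer is up but the port is closed; an unreachable peer instead times out with a different errno, which is one way to tell "target process down" apart from "host unreachable" in logs like this one.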
[... connect() failed (errno = 111) / sock connection error (tqpair=0x7f7ddc000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." triplets continue, 2024-07-15 16:33:05.025470 through 16:33:05.026846, interleaved with the test-script trace below ...]
00:34:22.321 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:22.321 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:22.321 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:22.321 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:22.321 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... error triplets for tqpair=0x7f7ddc000b90 continue, 16:33:05.026973 through 16:33:05.028322 ...]
00:34:22.321 [2024-07-15 16:33:05.030078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.321 [2024-07-15 16:33:05.030121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420
00:34:22.321 qpair failed and we were unable to recover it.
[... the same failure sequence repeats, alternating between tqpair=0x7f7ddc000b90 and tqpair=0x7f7dd4000b90, always addr=10.0.0.2, port=4420; duplicates elided ...]
00:34:22.325 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:22.325 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:22.325 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.325 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed, errno = 111 / "qpair failed and we were unable to recover it." records for tqpair=0x7f7ddc000b90 and tqpair=0x7f7dd4000b90 continue throughout; duplicates elided ...]
00:34:22.327 [2024-07-15 16:33:05.059600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.327 [2024-07-15 16:33:05.059626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.327 qpair failed and we were unable to recover it. 00:34:22.327 [2024-07-15 16:33:05.059764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.327 [2024-07-15 16:33:05.059791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.327 qpair failed and we were unable to recover it. 00:34:22.327 [2024-07-15 16:33:05.059924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.327 [2024-07-15 16:33:05.059950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.060213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.060239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.060375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.060402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.060535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.060561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.060693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.060720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.060861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.060905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.061028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.061056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.061207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.061234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 
00:34:22.328 [2024-07-15 16:33:05.061369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.061396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.061531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.061557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.061753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.061780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.061879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.061905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.062010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.062043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.062230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.062257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.062389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.062415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.062514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.062541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.062700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.062726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.062841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.062883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 
00:34:22.328 [2024-07-15 16:33:05.062993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.063020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.063147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.063174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.063309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.063335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.063455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.063482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.063669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.063696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.063845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.063887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.064007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.064036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.064195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.064221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.064333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.064360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.064500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.064527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 
00:34:22.328 [2024-07-15 16:33:05.064695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.064721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.064838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.064865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.328 [2024-07-15 16:33:05.064962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.328 [2024-07-15 16:33:05.064988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.328 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.065216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.065246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.065415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.065441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.065640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.065666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.065796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.065824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.065982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.066008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.066182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.066208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.066306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.066332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 
00:34:22.329 [2024-07-15 16:33:05.066489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.066514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.066676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.066702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.066833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.066859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.066971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.066998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.067225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.067256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.067361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.067387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.067740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.067776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.067912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.067938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.068086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.068112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 00:34:22.329 [2024-07-15 16:33:05.068275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.329 [2024-07-15 16:33:05.068300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.329 qpair failed and we were unable to recover it. 
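Note: errno = 111 is ECONNREFUSED on Linux. Each connect() in posix_sock_create reaches 10.0.0.2, but nothing is accepting on port 4420 yet, so nvme_tcp_qpair_connect_sock tears the qpair down and the initiator redials. This can be confirmed independently of SPDK; the sketch below uses bash's built-in /dev/tcp redirection and assumes only the address and port shown in the log:

  # Probe the listener the initiator keeps redialing (addr/port from the log).
  # While no NVMe-oF target is listening, the connect() fails with
  # "Connection refused" (errno 111, ECONNREFUSED), matching posix.c:1037.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "listener up on 10.0.0.2:4420"
  else
      echo "connect() refused or timed out, as in the log"
  fi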
00:34:22.329 [2024-07-15 16:33:05.068477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.329 [2024-07-15 16:33:05.068513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.329 qpair failed and we were unable to recover it.
[... identical sequences for tqpair=0x7f7ddc000b90 repeat through 16:33:05.069368 ...]
00:34:22.329 Malloc0
00:34:22.329 [2024-07-15 16:33:05.069583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.329 [2024-07-15 16:33:05.069618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.329 qpair failed and we were unable to recover it.
00:34:22.329 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.329 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:22.329 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.330 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... concurrent qpair-failure sequences for tqpair=0x7f7ddc000b90 continue through 16:33:05.073215 ...]
00:34:22.330 [2024-07-15 16:33:05.073233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:22.330 [2024-07-15 16:33:05.073379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.330 [2024-07-15 16:33:05.073405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.330 qpair failed and we were unable to recover it.
[... identical sequences for tqpair=0x7f7ddc000b90 repeat through 16:33:05.074697 ...]
00:34:22.330 [2024-07-15 16:33:05.074833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.331 [2024-07-15 16:33:05.074874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420
00:34:22.331 qpair failed and we were unable to recover it.
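Note: the nvme_tcp.c errors interleaved here come from the host-side NVMe/TCP initiator retrying the same transport ID while the target is still completing "*** TCP Transport Init ***". As an illustration only (the harness drives this through its own tooling, not nvme-cli), the equivalent manual attempt from a Linux host would look like this; the NQN, address, and port are the ones visible elsewhere in this log:

  # Hypothetical manual equivalent of the connection the initiator is retrying.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Until the target exposes a listener on 10.0.0.2:4420, this exits with
  # "Connection refused" -- the same ECONNREFUSED loop recorded above.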
00:34:22.331 [2024-07-15 16:33:05.075049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.331 [2024-07-15 16:33:05.075090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420
00:34:22.331 qpair failed and we were unable to recover it.
[... identical sequences for tqpair=0x7f7dd4000b90 repeat through 16:33:05.080917 ...]
00:34:22.332 [2024-07-15 16:33:05.081071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.332 [2024-07-15 16:33:05.081111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.332 qpair failed and we were unable to recover it.
[... one further identical sequence for tqpair=0x7f7ddc000b90 at 16:33:05.081222 ...]
00:34:22.332 [2024-07-15 16:33:05.081362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.332 [2024-07-15 16:33:05.081389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.332 qpair failed and we were unable to recover it.
00:34:22.332 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.332 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:22.332 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.332 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... concurrent qpair-failure sequences for tqpair=0x7f7ddc000b90 continue through 16:33:05.082483 ...]
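Note: the rpc_cmd traces above show the target being assembled while the initiator retries: first nvmf_create_transport -t tcp -o, then nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001. A sketch of how such a bring-up usually completes follows; only the two commands above appear in this log, and the add_ns/add_listener steps and Malloc0 sizing are assumptions based on stock rpc.py usage:

  # rpc_cmd wraps scripts/rpc.py in the SPDK test harness.
  rpc_cmd nvmf_create_transport -t tcp -o                  # as traced above
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0             # assumed sizing for the Malloc0 seen earlier
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                         # assumed next step
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # assumed next step
  # Once the listener is up on 10.0.0.2:4420, the connect() retries above
  # stop failing with errno = 111 and the qpairs can connect.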
00:34:22.332 [2024-07-15 16:33:05.082637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.332 [2024-07-15 16:33:05.082664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420
00:34:22.332 qpair failed and we were unable to recover it.
[... identical sequences for tqpair=0x7f7ddc000b90 repeat through 16:33:05.087381 ...]
00:34:22.333 [2024-07-15 16:33:05.087515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.087542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.087651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.087677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.087856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.087895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.088031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.088058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.088192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.088218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.088345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.088371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.088501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.088527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.088687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.088713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.088820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.088848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 00:34:22.333 [2024-07-15 16:33:05.088955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.333 [2024-07-15 16:33:05.088981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.333 qpair failed and we were unable to recover it. 
00:34:22.333 [2024-07-15 16:33:05.089104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.089131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.089233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.089259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.089426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.089452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.334 [2024-07-15 16:33:05.089609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:22.334 [2024-07-15 16:33:05.089636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.334 [2024-07-15 16:33:05.089789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.089816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.334 [2024-07-15 16:33:05.089971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.089998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.090166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.090192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.090317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.090344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 
00:34:22.334 [2024-07-15 16:33:05.090442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.090469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.090571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.090597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.090693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.090720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.090881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.090918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.091059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.091085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.091318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.091359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.091499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.091526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.091642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.091668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.091805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.091832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.091936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.091962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 
00:34:22.334 [2024-07-15 16:33:05.092101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.092127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.092265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.092291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.092429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.092455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.092620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.092646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.092814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.092854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.093031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.093076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.093179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.093207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.093349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.093375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.093570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.093596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.093761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.093789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 
00:34:22.334 [2024-07-15 16:33:05.093942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.093968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.334 [2024-07-15 16:33:05.094120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.334 [2024-07-15 16:33:05.094146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.334 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.094296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.094323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7de4000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.094472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.094499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.094646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.094672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.094807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.094833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.094931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.094957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.095090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.095116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.095276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.095303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.095407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.095433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 
00:34:22.335 [2024-07-15 16:33:05.095562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.095588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.095723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.095758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.095924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.095950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.096056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.096082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.096243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.096269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.096379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.096405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.096535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.096561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.096694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.096720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.096870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.096897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.097055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.097081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 
00:34:22.335 [2024-07-15 16:33:05.097215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.097241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.097402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.097429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.335 [2024-07-15 16:33:05.097535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.097564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.097673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.335 [2024-07-15 16:33:05.097700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.335 [2024-07-15 16:33:05.097867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.097894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.335 [2024-07-15 16:33:05.098054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.335 [2024-07-15 16:33:05.098080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.335 qpair failed and we were unable to recover it. 00:34:22.335 [2024-07-15 16:33:05.098213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.098240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.098374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.098400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 
00:34:22.336 [2024-07-15 16:33:05.098531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.098558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.098665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.098692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.098826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.098853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.098987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.099013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.099114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.099141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.099276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.099302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.099456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.099483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.099583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.099609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.099714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.099747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.099886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.099913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 
00:34:22.336 [2024-07-15 16:33:05.100075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.100101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.100209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.100233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.100363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.100389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.100503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.100529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.100693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.100720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.100897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.100924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.101031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.101057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.101187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.101213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 00:34:22.336 [2024-07-15 16:33:05.101313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.336 [2024-07-15 16:33:05.101339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ddc000b90 with addr=10.0.0.2, port=4420 00:34:22.336 qpair failed and we were unable to recover it. 
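The storm above is one failure mode: errno = 111 is ECONNREFUSED on Linux, i.e. nothing is accepting on 10.0.0.2:4420 yet, so every reconnect attempt from the SPDK initiator is turned away while the xtrace lines show target_disconnect.sh still configuring the subsystem over RPC. The rpc_cmd calls are autotest's thin wrapper around scripts/rpc.py; as a rough standalone sketch of the same bring-up (assumptions: a running nvmf_tgt on the default RPC socket, and a 64 MiB malloc bdev standing in for Malloc0):

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420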
00:34:22.336 [2024-07-15 16:33:05.101464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:22.336 [2024-07-15 16:33:05.103945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.336 [2024-07-15 16:33:05.104077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.336 [2024-07-15 16:33:05.104104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.336 [2024-07-15 16:33:05.104120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.336 [2024-07-15 16:33:05.104133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90
00:34:22.336 [2024-07-15 16:33:05.104167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:22.336 qpair failed and we were unable to recover it.
00:34:22.336 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.336 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:22.336 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.336 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.336 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.336 16:33:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 479397
00:34:22.336 [2024-07-15 16:33:05.113794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.336 [2024-07-15 16:33:05.113904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.336 [2024-07-15 16:33:05.113932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.336 [2024-07-15 16:33:05.113948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.336 [2024-07-15 16:33:05.113961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90
00:34:22.336 [2024-07-15 16:33:05.113991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:22.336 qpair failed and we were unable to recover it.
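After the nvmf_tcp_listen NOTICE the failure moves up a layer: the TCP connect now succeeds, but the Fabrics CONNECT for I/O qpair 2 references admin controller ID 0x1, which the target no longer has (this test tears the association down deliberately), so _nvmf_ctrlr_add_io_qpair rejects it. Read as NVMe-oF status, sct 1, sc 130 is status-code-type 0x1 (command specific) with SC 0x82, i.e. Connect Invalid Parameters. For poking the same listener by hand from a host with the kernel initiator, nvme-cli would look roughly like this (illustrative only; the harness drives SPDK's userspace initiator, not nvme-cli):

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1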
00:34:22.337 [2024-07-15 16:33:05.123864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.337 [2024-07-15 16:33:05.123963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.337 [2024-07-15 16:33:05.123990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.337 [2024-07-15 16:33:05.124005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.337 [2024-07-15 16:33:05.124019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90
00:34:22.337 [2024-07-15 16:33:05.124048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:22.337 qpair failed and we were unable to recover it.
00:34:22.598 [16:33:05.133819 - 16:33:05.354626] (the same seven-line CONNECT-failure block recurs roughly every 10 ms for tqpair=0x7f7ddc000b90, qpair id 2; every attempt ends with "qpair failed and we were unable to recover it.")
00:34:22.598 [2024-07-15 16:33:05.364470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.598 [2024-07-15 16:33:05.364563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.598 [2024-07-15 16:33:05.364591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.598 [2024-07-15 16:33:05.364606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.598 [2024-07-15 16:33:05.364619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.598 [2024-07-15 16:33:05.364655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.598 qpair failed and we were unable to recover it. 00:34:22.598 [2024-07-15 16:33:05.374475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.598 [2024-07-15 16:33:05.374580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.598 [2024-07-15 16:33:05.374608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.598 [2024-07-15 16:33:05.374623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.598 [2024-07-15 16:33:05.374636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.598 [2024-07-15 16:33:05.374666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.598 qpair failed and we were unable to recover it. 00:34:22.598 [2024-07-15 16:33:05.384486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.598 [2024-07-15 16:33:05.384591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.598 [2024-07-15 16:33:05.384618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.598 [2024-07-15 16:33:05.384633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.598 [2024-07-15 16:33:05.384646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.598 [2024-07-15 16:33:05.384676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.598 qpair failed and we were unable to recover it. 
00:34:22.598 [2024-07-15 16:33:05.394552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.598 [2024-07-15 16:33:05.394653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.598 [2024-07-15 16:33:05.394681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.598 [2024-07-15 16:33:05.394697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.598 [2024-07-15 16:33:05.394710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.598 [2024-07-15 16:33:05.394748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.598 qpair failed and we were unable to recover it. 00:34:22.598 [2024-07-15 16:33:05.404570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.598 [2024-07-15 16:33:05.404672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.598 [2024-07-15 16:33:05.404699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.598 [2024-07-15 16:33:05.404713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.598 [2024-07-15 16:33:05.404726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.598 [2024-07-15 16:33:05.404765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.598 qpair failed and we were unable to recover it. 00:34:22.598 [2024-07-15 16:33:05.414559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.598 [2024-07-15 16:33:05.414666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.598 [2024-07-15 16:33:05.414701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.598 [2024-07-15 16:33:05.414717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.598 [2024-07-15 16:33:05.414750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.598 [2024-07-15 16:33:05.414782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.598 qpair failed and we were unable to recover it. 
00:34:22.598 [2024-07-15 16:33:05.424650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.598 [2024-07-15 16:33:05.424761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.598 [2024-07-15 16:33:05.424788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.424803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.424816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.424846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.434607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.434707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.434733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.434757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.434770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.434800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.444623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.444723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.444768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.444783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.444796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.444826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 
00:34:22.599 [2024-07-15 16:33:05.454674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.454794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.454821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.454836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.454854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.454885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.464726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.464861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.464887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.464903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.464916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.464945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.474706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.474823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.474850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.474865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.474878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.474908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 
00:34:22.599 [2024-07-15 16:33:05.484760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.484858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.484885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.484900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.484913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.484943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.494796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.494898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.494923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.494938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.494951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.494981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.504833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.504940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.504967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.504982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.504994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.505024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 
00:34:22.599 [2024-07-15 16:33:05.514866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.514971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.514997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.515013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.515026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.515055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.524886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.525022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.525049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.525064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.525077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.525107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.534902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.535006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.535031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.535046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.535058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.535087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 
00:34:22.599 [2024-07-15 16:33:05.544964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.545066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.545093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.545108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.545125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.545157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.554982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.555082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.555108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.555124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.555137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.599 [2024-07-15 16:33:05.555166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.599 qpair failed and we were unable to recover it. 00:34:22.599 [2024-07-15 16:33:05.565016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.599 [2024-07-15 16:33:05.565109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.599 [2024-07-15 16:33:05.565135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.599 [2024-07-15 16:33:05.565150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.599 [2024-07-15 16:33:05.565163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.600 [2024-07-15 16:33:05.565192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.600 qpair failed and we were unable to recover it. 
00:34:22.600 [2024-07-15 16:33:05.575058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.600 [2024-07-15 16:33:05.575164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.600 [2024-07-15 16:33:05.575191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.600 [2024-07-15 16:33:05.575206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.600 [2024-07-15 16:33:05.575219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.600 [2024-07-15 16:33:05.575248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.600 qpair failed and we were unable to recover it. 00:34:22.859 [2024-07-15 16:33:05.585065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.859 [2024-07-15 16:33:05.585166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.859 [2024-07-15 16:33:05.585193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.859 [2024-07-15 16:33:05.585208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.859 [2024-07-15 16:33:05.585221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.859 [2024-07-15 16:33:05.585250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.859 qpair failed and we were unable to recover it. 00:34:22.859 [2024-07-15 16:33:05.595107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.859 [2024-07-15 16:33:05.595205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.859 [2024-07-15 16:33:05.595232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.859 [2024-07-15 16:33:05.595246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.859 [2024-07-15 16:33:05.595259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.859 [2024-07-15 16:33:05.595289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.859 qpair failed and we were unable to recover it. 
00:34:22.859 [2024-07-15 16:33:05.605123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.859 [2024-07-15 16:33:05.605235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.859 [2024-07-15 16:33:05.605260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.859 [2024-07-15 16:33:05.605275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.859 [2024-07-15 16:33:05.605288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.859 [2024-07-15 16:33:05.605328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.859 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.615188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.615292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.615318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.615333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.615346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.615375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.625179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.625280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.625306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.625321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.625334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.625363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 
00:34:22.860 [2024-07-15 16:33:05.635210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.635307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.635332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.635353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.635367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.635398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.645202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.645296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.645322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.645337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.645350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.645379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.655291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.655409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.655434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.655450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.655463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.655493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 
00:34:22.860 [2024-07-15 16:33:05.665334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.665433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.665460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.665475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.665488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.665518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.675334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.675434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.675460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.675475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.675488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.675517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.685365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.685461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.685486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.685501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.685514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.685544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 
00:34:22.860 [2024-07-15 16:33:05.695415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.695563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.695589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.695604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.695616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.695656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.705387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.705499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.705526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.705540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.705553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.705583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.715418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.715516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.715541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.715556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.715569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.715599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 
00:34:22.860 [2024-07-15 16:33:05.725477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.725573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.860 [2024-07-15 16:33:05.725605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.860 [2024-07-15 16:33:05.725621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.860 [2024-07-15 16:33:05.725633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.860 [2024-07-15 16:33:05.725663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.860 qpair failed and we were unable to recover it. 00:34:22.860 [2024-07-15 16:33:05.735522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.860 [2024-07-15 16:33:05.735629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.735655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.735670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.735683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.861 [2024-07-15 16:33:05.735713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.861 qpair failed and we were unable to recover it. 00:34:22.861 [2024-07-15 16:33:05.745527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.861 [2024-07-15 16:33:05.745649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.745675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.745690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.745703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.861 [2024-07-15 16:33:05.745750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.861 qpair failed and we were unable to recover it. 
00:34:22.861 [2024-07-15 16:33:05.755534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.861 [2024-07-15 16:33:05.755633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.755660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.755674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.755688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ddc000b90 00:34:22.861 [2024-07-15 16:33:05.755717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:22.861 qpair failed and we were unable to recover it. 00:34:22.861 [2024-07-15 16:33:05.765587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.861 [2024-07-15 16:33:05.765694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.765725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.765750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.765765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90 00:34:22.861 [2024-07-15 16:33:05.765802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.861 qpair failed and we were unable to recover it. 00:34:22.861 [2024-07-15 16:33:05.775672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.861 [2024-07-15 16:33:05.775804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.775843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.775858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.775870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90 00:34:22.861 [2024-07-15 16:33:05.775909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.861 qpair failed and we were unable to recover it. 
00:34:22.861 [2024-07-15 16:33:05.785653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.861 [2024-07-15 16:33:05.785767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.785794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.785810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.785822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90 00:34:22.861 [2024-07-15 16:33:05.785853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.861 qpair failed and we were unable to recover it. 00:34:22.861 [2024-07-15 16:33:05.795678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.861 [2024-07-15 16:33:05.795786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.795814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.795829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.795841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90 00:34:22.861 [2024-07-15 16:33:05.795871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.861 qpair failed and we were unable to recover it. 00:34:22.861 [2024-07-15 16:33:05.805702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:22.861 [2024-07-15 16:33:05.805821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:22.861 [2024-07-15 16:33:05.805847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:22.861 [2024-07-15 16:33:05.805862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:22.861 [2024-07-15 16:33:05.805875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90 00:34:22.861 [2024-07-15 16:33:05.805906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:22.861 qpair failed and we were unable to recover it. 
00:34:22.861 [2024-07-15 16:33:05.815781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.861 [2024-07-15 16:33:05.815887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.861 [2024-07-15 16:33:05.815918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.861 [2024-07-15 16:33:05.815934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.861 [2024-07-15 16:33:05.815947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:22.861 [2024-07-15 16:33:05.815977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:22.861 qpair failed and we were unable to recover it.
00:34:22.861 [2024-07-15 16:33:05.825784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.861 [2024-07-15 16:33:05.825884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.861 [2024-07-15 16:33:05.825910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.861 [2024-07-15 16:33:05.825926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.861 [2024-07-15 16:33:05.825938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:22.861 [2024-07-15 16:33:05.825969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:22.861 qpair failed and we were unable to recover it.
00:34:22.861 [2024-07-15 16:33:05.835791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:22.861 [2024-07-15 16:33:05.835898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:22.861 [2024-07-15 16:33:05.835925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:22.861 [2024-07-15 16:33:05.835941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:22.861 [2024-07-15 16:33:05.835953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:22.861 [2024-07-15 16:33:05.835984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:22.861 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.845825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.845926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.845953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.845969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.845982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.846012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.855887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.856037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.856062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.856078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.856090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.856134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.865887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.865993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.866020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.866036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.866048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.866078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.875944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.876046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.876072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.876088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.876100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.876133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.885938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.886081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.886108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.886123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.886136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.886176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.895993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.896135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.896160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.896174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.896188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.896218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.906113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.906233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.906257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.906272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.906285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.906326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.916056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.916165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.916191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.916206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.916220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.916250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.926062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.926159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.926186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.926201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.926213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.926243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.936141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.936254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.936281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.936296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.936308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.936338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.946071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.946200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.946226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.946241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.946260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.946291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.956142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.122 [2024-07-15 16:33:05.956238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.122 [2024-07-15 16:33:05.956265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.122 [2024-07-15 16:33:05.956280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.122 [2024-07-15 16:33:05.956292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.122 [2024-07-15 16:33:05.956331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.122 qpair failed and we were unable to recover it.
00:34:23.122 [2024-07-15 16:33:05.966119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.123 [2024-07-15 16:33:05.966222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.123 [2024-07-15 16:33:05.966247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.123 [2024-07-15 16:33:05.966261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.123 [2024-07-15 16:33:05.966273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.123 [2024-07-15 16:33:05.966303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.123 qpair failed and we were unable to recover it.
00:34:23.123 [2024-07-15 16:33:05.976220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.123 [2024-07-15 16:33:05.976370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.123 [2024-07-15 16:33:05.976394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.123 [2024-07-15 16:33:05.976409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.123 [2024-07-15 16:33:05.976421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.123 [2024-07-15 16:33:05.976450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.123 qpair failed and we were unable to recover it.
00:34:23.123 [2024-07-15 16:33:05.986222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:23.123 [2024-07-15 16:33:05.986328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:23.123 [2024-07-15 16:33:05.986353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:23.123 [2024-07-15 16:33:05.986367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:23.123 [2024-07-15 16:33:05.986381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7de4000b90
00:34:23.123 [2024-07-15 16:33:05.986410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:23.123 qpair failed and we were unable to recover it.
00:34:23.123 [2024-07-15 16:33:05.986447] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:34:23.123 A controller has encountered a failure and is being reset.
00:34:23.123 [2024-07-15 16:33:05.986507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1111b10 (9): Bad file descriptor
00:34:23.123 Controller properly reset.
00:34:28.382 Initializing NVMe Controllers
00:34:28.382 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:28.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:28.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:34:28.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:34:28.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:34:28.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:34:28.382 Initialization complete. Launching workers.
00:34:28.382 Starting thread on core 1
00:34:28.382 Starting thread on core 2
00:34:28.382 Starting thread on core 3
00:34:28.382 Starting thread on core 0
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:34:28.382
00:34:28.382 real 0m10.670s
00:34:28.382 user 0m30.747s
00:34:28.382 sys 0m7.320s
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:28.382 ************************************
00:34:28.382 END TEST nvmf_target_disconnect_tc2
00:34:28.382 ************************************
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:28.382 rmmod nvme_tcp
00:34:28.382 rmmod nvme_fabrics
00:34:28.382 rmmod nvme_keyring
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 479992 ']'
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 479992
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 479992 ']'
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 479992
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:28.382 16:33:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 479992
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']'
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 479992'
00:34:28.382 killing process with pid 479992
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 479992
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 479992
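killprocess, traced above, guards the kill with a liveness probe (kill -0) and a process-name check before terminating the target; condensed to its core (pid value from this run, and note that wait can only reap children of the current shell):

  pid=479992
  if kill -0 "$pid" 2>/dev/null; then                        # still running?
      [ "$(ps --no-headers -o comm= "$pid")" != sudo ] && kill "$pid"   # never kill a sudo wrapper
      wait "$pid" 2>/dev/null                                # collect the exit status
  fi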
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:28.382 16:33:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:30.918 16:33:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:30.918
00:34:30.918 real 0m15.499s
00:34:30.918 user 0m56.180s
00:34:30.918 sys 0m9.640s
00:34:30.918 16:33:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:30.918 16:33:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:34:30.918 ************************************
00:34:30.918 END TEST nvmf_target_disconnect
00:34:30.918 ************************************
00:34:30.918 16:33:13 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:34:30.918 16:33:13 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:30.918 16:33:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:30.918 16:33:13 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:34:30.918
00:34:30.918 real 27m4.878s
00:34:30.918 user 74m46.728s
00:34:30.918 sys 6m31.037s
00:34:30.918 16:33:13 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:30.918 16:33:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:30.918 ************************************
00:34:30.918 END TEST nvmf_tcp
00:34:30.918 ************************************
00:34:30.918 16:33:13 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:34:30.918 16:33:13 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:34:30.918 16:33:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:30.918 16:33:13 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:30.918 16:33:13 -- common/autotest_common.sh@10 -- # set +x
00:34:30.918 ************************************
00:34:30.918 START TEST spdkcli_nvmf_tcp
00:34:30.918 ************************************
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
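run_test is the autotest wrapper that produces the START TEST/END TEST banners and the real/user/sys timing seen throughout this log; a minimal stand-in with the same shape (a sketch, not SPDK's actual implementation):

  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                  # emits the real/user/sys lines
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }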
00:34:30.918 * Looking for test storage...
00:34:30.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=481696
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 481696
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 481696 ']'
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:30.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:30.918 [2024-07-15 16:33:13.493165] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:30.918 [2024-07-15 16:33:13.493261] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481696 ]
00:34:30.918 EAL: No free 2048 kB hugepages reported on node 1
00:34:30.918 [2024-07-15 16:33:13.568171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:30.918 [2024-07-15 16:33:13.663463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:34:30.918 [2024-07-15 16:33:13.663470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
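waitforlisten (traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100) blocks until the freshly launched nvmf_tgt exposes its RPC socket. The essence of that wait is a poll on the UNIX socket; a bare-bones sketch, the real helper additionally verifies the pid is still alive:

  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do      # max_retries=100
      [ -S "$rpc_addr" ] && break      # -S: the socket file exists
      sleep 0.1
  done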
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:30.918 16:33:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:34:30.918 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:34:30.918 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:34:30.918 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:34:30.918 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:34:30.918 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:34:30.918 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:34:30.918 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:34:30.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:34:30.919 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:34:30.919 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:34:30.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:34:30.919 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:34:30.919 '
00:34:33.452 [2024-07-15 16:33:16.313036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:34.831 [2024-07-15 16:33:17.553414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:34:37.369 [2024-07-15 16:33:19.828417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:34:39.277 [2024-07-15 16:33:21.790587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:34:40.650 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:34:40.650 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:34:40.650 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:34:40.650 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:34:40.650 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:34:40.650 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:34:40.650 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:34:40.650 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:34:40.650 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:34:40.650 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:34:40.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:34:40.650 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
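spdkcli_job.py drives the spdkcli shell; the same target configuration can also be expressed as plain JSON-RPC calls. A sketch using spdk/scripts/rpc.py (the method names are real rpc.py methods of this era, but the exact flag spellings are from memory and not verified against this tree):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1                 # 32 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t TCP -u 8192                 # io_unit_size=8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260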
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:34:40.650 16:33:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
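check_match captures `spdkcli.py ll /nvmf` into spdkcli_nvmf.test and validates it with the match tool, whose .match template may contain wildcard lines; with wildcard support set aside, the check reduces to a capture-and-diff (a sketch, not the tool itself):

  out=test/spdkcli/match_files/spdkcli_nvmf.test
  ./scripts/spdkcli.py ll /nvmf > "$out"
  diff "$out" "$out.match"      # the real match tool additionally tolerates wildcard patterns
  rm -f "$out"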
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:40.908 16:33:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:34:40.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:34:40.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:34:40.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:34:40.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:34:40.909 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:34:40.909 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:34:40.909 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:34:40.909 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:34:40.909 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:34:40.909 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:34:40.909 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:34:40.909 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:34:40.909 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:34:40.909 '
00:34:46.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:34:46.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:34:46.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:34:46.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:34:46.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:34:46.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:34:46.180 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:34:46.180 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:34:46.180 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:34:46.180 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:34:46.180 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:34:46.180 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:34:46.180 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:34:46.180 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:34:46.180 16:33:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:34:46.180 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:46.180 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 481696
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 481696 ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 481696
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 481696
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 481696'
00:34:46.437 killing process with pid 481696
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 481696
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 481696
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 481696 ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 481696
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 481696 ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 481696
00:34:46.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (481696) - No such process
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 481696 is not found'
00:34:46.437 Process with pid 481696 is not found
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:34:46.437 16:33:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:34:46.437
00:34:46.437 real 0m16.031s
00:34:46.437 user 0m33.870s
00:34:46.437 sys 0m0.801s
16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable
16:33:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:46.695 ************************************
00:34:46.695 END TEST spdkcli_nvmf_tcp
00:34:46.695 ************************************
00:34:46.695 16:33:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:34:46.695 16:33:29 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:46.695 16:33:29 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:46.695 16:33:29 -- common/autotest_common.sh@10 -- # set +x
00:34:46.695 ************************************
00:34:46.695 START TEST nvmf_identify_passthru
00:34:46.695 ************************************
00:34:46.695 16:33:29 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:34:46.695 * Looking for test storage...
00:34:46.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:46.695 16:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
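NVME_HOSTNQN and NVME_HOSTID above come straight from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the hostid is that UUID with the prefix stripped:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 on this machine
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # cd6acfbe-4794-e311-a299-001e67a97b02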
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:46.695 16:33:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:46.695 16:33:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:46.695 16:33:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:46.695 16:33:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.695 16:33:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.695 16:33:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.695 16:33:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:34:46.695 16:33:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:46.695 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:46.696 16:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:46.696 16:33:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:46.696 16:33:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:46.696 16:33:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:46.696 16:33:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.696 16:33:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.696 16:33:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.696 16:33:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:34:46.696 16:33:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:46.696 16:33:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:46.696 16:33:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:34:46.696 16:33:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:34:46.696 16:33:29 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable
00:34:46.696 16:33:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=()
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=()
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=()
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=()
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=()
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=()
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=()
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:34:48.598 Found 0000:84:00.0 (0x8086 - 0x159b)
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:34:48.598 Found 0000:84:00.1 (0x8086 - 0x159b)
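The scan above matches PCI devices against the whitelisted Intel/Mellanox IDs; the two E810 ports it found can also be listed directly by vendor:device pair (IDs taken from the log):

  lspci -d 8086:159b    # 0x8086/0x159b -> 0000:84:00.0 and 0000:84:00.1 here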
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:48.598 Found net devices under 0000:84:00.0: cvl_0_0 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:48.598 Found net devices under 0000:84:00.1: cvl_0_1 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:48.598 16:33:31 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:48.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:34:48.598 00:34:48.598 --- 10.0.0.2 ping statistics --- 00:34:48.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.598 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:48.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:34:48.598 00:34:48.598 --- 10.0.0.1 ping statistics --- 00:34:48.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.598 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:34:48.598 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:48.599 16:33:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:48.599 16:33:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:48.599 16:33:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:48.599 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:48.858 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:48.858 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:82:00.0 00:34:48.858 16:33:31 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:82:00.0 00:34:48.858 16:33:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:34:48.858 16:33:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:34:48.858 16:33:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:34:48.858 16:33:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:48.858 16:33:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:48.858 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.040 
16:33:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:34:53.040 16:33:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:53.040 16:33:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:34:53.040 16:33:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:53.040 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.224 16:33:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:57.224 16:33:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:57.224 16:33:39 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:57.224 16:33:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.224 16:33:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:57.224 16:33:39 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:57.224 16:33:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.224 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=486210 00:34:57.224 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:57.224 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:57.224 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 486210 00:34:57.224 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 486210 ']' 00:34:57.224 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:57.224 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:57.224 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:57.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:57.224 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:57.224 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.224 [2024-07-15 16:33:40.056032] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:57.224 [2024-07-15 16:33:40.056146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:57.224 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.224 [2024-07-15 16:33:40.126360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:57.482 [2024-07-15 16:33:40.214538] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:57.482 [2024-07-15 16:33:40.214587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
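Two steps in the trace above are worth isolating. First, nvmf_tcp_init builds the test topology: the second port of the NIC pair stays in the root namespace as the initiator side, while the first port is moved into a private network namespace where the target will run. Second, the test derives its reference serial and model numbers by identifying the PCIe controller directly. A condensed sketch assembled from the traced commands (paths abbreviated to $rootdir; the real helpers carry extra checks):

# Target/initiator topology: cvl_0_0 -> target netns (10.0.0.2),
# cvl_0_1 -> initiator in the root namespace (10.0.0.1).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

# Reference values for the passthru comparison, read straight from the drive:
bdf=0000:82:00.0
nvme_serial_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
    grep 'Serial Number:' | awk '{print $3}')
nvme_model_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
    grep 'Model Number:' | awk '{print $3}')

The same two fields are later fetched again over the NVMe-oF path (subnqn nqn.2016-06.io.spdk:cnode1) and compared against these, which is the whole point of the passthru test.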
00:34:57.482 [2024-07-15 16:33:40.214608] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:57.482 [2024-07-15 16:33:40.214618] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:57.482 [2024-07-15 16:33:40.214629] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:57.482 [2024-07-15 16:33:40.214679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.482 [2024-07-15 16:33:40.214702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.482 [2024-07-15 16:33:40.214827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:57.482 [2024-07-15 16:33:40.214831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:57.482 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.482 INFO: Log level set to 20 00:34:57.482 INFO: Requests: 00:34:57.482 { 00:34:57.482 "jsonrpc": "2.0", 00:34:57.482 "method": "nvmf_set_config", 00:34:57.482 "id": 1, 00:34:57.482 "params": { 00:34:57.482 "admin_cmd_passthru": { 00:34:57.482 "identify_ctrlr": true 00:34:57.482 } 00:34:57.482 } 00:34:57.482 } 00:34:57.482 00:34:57.482 INFO: response: 00:34:57.482 { 00:34:57.482 "jsonrpc": "2.0", 00:34:57.482 "id": 1, 00:34:57.482 "result": true 00:34:57.482 } 00:34:57.482 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.482 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.482 INFO: Setting log level to 20 00:34:57.482 INFO: Setting log level to 20 00:34:57.482 INFO: Log level set to 20 00:34:57.482 INFO: Log level set to 20 00:34:57.482 INFO: Requests: 00:34:57.482 { 00:34:57.482 "jsonrpc": "2.0", 00:34:57.482 "method": "framework_start_init", 00:34:57.482 "id": 1 00:34:57.482 } 00:34:57.482 00:34:57.482 INFO: Requests: 00:34:57.482 { 00:34:57.482 "jsonrpc": "2.0", 00:34:57.482 "method": "framework_start_init", 00:34:57.482 "id": 1 00:34:57.482 } 00:34:57.482 00:34:57.482 [2024-07-15 16:33:40.363938] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:57.482 INFO: response: 00:34:57.482 { 00:34:57.482 "jsonrpc": "2.0", 00:34:57.482 "id": 1, 00:34:57.482 "result": true 00:34:57.482 } 00:34:57.482 00:34:57.482 INFO: response: 00:34:57.482 { 00:34:57.482 "jsonrpc": "2.0", 00:34:57.482 "id": 1, 00:34:57.482 "result": true 00:34:57.482 } 00:34:57.482 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.482 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.482 16:33:40 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:57.482 INFO: Setting log level to 40 00:34:57.482 INFO: Setting log level to 40 00:34:57.482 INFO: Setting log level to 40 00:34:57.482 [2024-07-15 16:33:40.373881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:57.482 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.482 16:33:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:57.482 16:33:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.760 Nvme0n1 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.760 [2024-07-15 16:33:43.257596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.760 [ 00:35:00.760 { 00:35:00.760 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:00.760 "subtype": "Discovery", 00:35:00.760 "listen_addresses": [], 00:35:00.760 "allow_any_host": true, 00:35:00.760 "hosts": [] 00:35:00.760 }, 00:35:00.760 { 00:35:00.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:00.760 "subtype": "NVMe", 00:35:00.760 "listen_addresses": [ 00:35:00.760 { 00:35:00.760 "trtype": "TCP", 00:35:00.760 "adrfam": "IPv4", 00:35:00.760 "traddr": "10.0.0.2", 00:35:00.760 "trsvcid": "4420" 00:35:00.760 } 00:35:00.760 ], 00:35:00.760 "allow_any_host": true, 00:35:00.760 "hosts": [], 00:35:00.760 "serial_number": 
"SPDK00000000000001", 00:35:00.760 "model_number": "SPDK bdev Controller", 00:35:00.760 "max_namespaces": 1, 00:35:00.760 "min_cntlid": 1, 00:35:00.760 "max_cntlid": 65519, 00:35:00.760 "namespaces": [ 00:35:00.760 { 00:35:00.760 "nsid": 1, 00:35:00.760 "bdev_name": "Nvme0n1", 00:35:00.760 "name": "Nvme0n1", 00:35:00.760 "nguid": "15831CE1ACA74C009BBA3AF402C76912", 00:35:00.760 "uuid": "15831ce1-aca7-4c00-9bba-3af402c76912" 00:35:00.760 } 00:35:00.760 ] 00:35:00.760 } 00:35:00.760 ] 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:00.760 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:00.760 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:00.760 16:33:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:00.760 rmmod nvme_tcp 00:35:00.760 rmmod nvme_fabrics 00:35:00.760 rmmod nvme_keyring 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:00.760 16:33:43 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 486210 ']' 00:35:00.760 16:33:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 486210 00:35:00.760 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 486210 ']' 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 486210 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 486210 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 486210' 00:35:00.761 killing process with pid 486210 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 486210 00:35:00.761 16:33:43 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 486210 00:35:02.659 16:33:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:02.659 16:33:45 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:02.659 16:33:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:02.659 16:33:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:02.659 16:33:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:02.659 16:33:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.659 16:33:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:02.659 16:33:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.563 16:33:47 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:04.563 00:35:04.563 real 0m17.825s 00:35:04.563 user 0m26.507s 00:35:04.563 sys 0m2.240s 00:35:04.563 16:33:47 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:04.563 16:33:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.563 ************************************ 00:35:04.563 END TEST nvmf_identify_passthru 00:35:04.563 ************************************ 00:35:04.563 16:33:47 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:04.563 16:33:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:04.563 16:33:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:04.563 16:33:47 -- common/autotest_common.sh@10 -- # set +x 00:35:04.563 ************************************ 00:35:04.563 START TEST nvmf_dif 00:35:04.563 ************************************ 00:35:04.563 16:33:47 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:04.563 * Looking for test storage... 
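The identify_passthru teardown just above is the standard nvmftestfini sequence. Roughly, as a sketch (killprocess and remove_spdk_ns do more validation than shown, and "ip netns delete" here stands in for what _remove_spdk_ns effectively does):

# Unload the initiator-side kernel modules, stop the target, undo the netns.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # pid 486210 in this run
ip netns delete cvl_0_0_ns_spdk      # drop the target namespace
ip -4 addr flush cvl_0_1             # clear the initiator-side address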
00:35:04.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:04.563 16:33:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.563 16:33:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:04.563 16:33:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.563 16:33:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.563 16:33:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.564 16:33:47 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.564 16:33:47 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.564 16:33:47 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.564 16:33:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.564 16:33:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.564 16:33:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.564 16:33:47 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:04.564 16:33:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:04.564 16:33:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:04.564 16:33:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:04.564 16:33:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:04.564 16:33:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:04.564 16:33:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.564 16:33:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.564 16:33:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:04.564 16:33:47 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:04.564 16:33:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:06.463 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:06.463 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.463 16:33:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:06.464 Found net devices under 0000:84:00.0: cvl_0_0 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:06.464 Found net devices under 0000:84:00.1: cvl_0_1 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:06.464 16:33:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.721 16:33:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.721 16:33:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.721 16:33:49 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:06.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:35:06.721 00:35:06.721 --- 10.0.0.2 ping statistics --- 00:35:06.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.721 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:35:06.721 16:33:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:35:06.721 00:35:06.721 --- 10.0.0.1 ping statistics --- 00:35:06.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.721 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:35:06.721 16:33:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.721 16:33:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:06.721 16:33:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:06.721 16:33:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:07.654 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:07.655 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:07.655 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:07.655 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:07.655 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:07.655 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:07.655 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:07.655 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:07.655 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:07.655 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:07.655 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:07.655 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:07.655 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:07.655 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:07.655 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:07.655 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:07.655 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:07.912 16:33:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:07.912 16:33:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=489370 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:07.912 16:33:50 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 489370 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 489370 ']' 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:07.912 16:33:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:07.912 [2024-07-15 16:33:50.845616] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:07.912 [2024-07-15 16:33:50.845705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.912 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.170 [2024-07-15 16:33:50.912973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.170 [2024-07-15 16:33:51.001856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.170 [2024-07-15 16:33:51.001913] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.170 [2024-07-15 16:33:51.001927] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.170 [2024-07-15 16:33:51.001938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.170 [2024-07-15 16:33:51.001948] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
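Once the target is listening on its RPC socket, the dif suite configures it entirely through rpc_cmd (a thin wrapper around scripts/rpc.py). The calls traced below, collected in one place:

# DIF insert/strip is enabled on the transport itself; each test case then
# backs a subsystem with a null bdev carrying 16-byte metadata and DIF type 1.
rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420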
00:35:08.170 [2024-07-15 16:33:51.001974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:08.170 16:33:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.170 16:33:51 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.170 16:33:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:08.170 16:33:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.170 [2024-07-15 16:33:51.133532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.170 16:33:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:08.170 16:33:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.428 ************************************ 00:35:08.428 START TEST fio_dif_1_default 00:35:08.428 ************************************ 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.428 bdev_null0 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.428 [2024-07-15 16:33:51.189830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.428 { 00:35:08.428 "params": { 00:35:08.428 "name": "Nvme$subsystem", 00:35:08.428 "trtype": "$TEST_TRANSPORT", 00:35:08.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.428 "adrfam": "ipv4", 00:35:08.428 "trsvcid": "$NVMF_PORT", 00:35:08.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.428 "hdgst": ${hdgst:-false}, 00:35:08.428 "ddgst": ${ddgst:-false} 00:35:08.428 }, 00:35:08.428 "method": "bdev_nvme_attach_controller" 00:35:08.428 } 00:35:08.428 EOF 00:35:08.428 )") 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:08.428 "params": { 00:35:08.428 "name": "Nvme0", 00:35:08.428 "trtype": "tcp", 00:35:08.428 "traddr": "10.0.0.2", 00:35:08.428 "adrfam": "ipv4", 00:35:08.428 "trsvcid": "4420", 00:35:08.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.428 "hdgst": false, 00:35:08.428 "ddgst": false 00:35:08.428 }, 00:35:08.428 "method": "bdev_nvme_attach_controller" 00:35:08.428 }' 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:08.428 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:08.429 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:08.429 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:08.429 16:33:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.686 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:08.686 fio-3.35 00:35:08.686 Starting 1 thread 00:35:08.686 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.953 00:35:20.953 filename0: (groupid=0, jobs=1): err= 0: pid=489598: Mon Jul 15 16:34:02 2024 00:35:20.953 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10007msec) 00:35:20.953 slat (nsec): min=4852, max=40960, avg=9480.52, stdev=4397.52 00:35:20.953 clat (usec): min=553, max=42381, avg=20948.33, stdev=20318.08 00:35:20.953 lat (usec): min=560, max=42392, avg=20957.81, stdev=20317.62 00:35:20.953 clat percentiles (usec): 00:35:20.953 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 619], 00:35:20.953 | 30.00th=[ 652], 40.00th=[ 725], 50.00th=[ 4686], 60.00th=[41157], 00:35:20.953 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:20.953 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:20.953 | 99.99th=[42206] 00:35:20.953 bw ( KiB/s): min= 704, max= 768, per=99.78%, avg=761.60, stdev=19.70, samples=20 00:35:20.953 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:35:20.953 lat 
(usec) : 750=41.40%, 1000=8.28% 00:35:20.953 lat (msec) : 2=0.21%, 10=0.21%, 50=49.90% 00:35:20.953 cpu : usr=90.31%, sys=9.43%, ctx=18, majf=0, minf=239 00:35:20.953 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.953 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.953 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:20.953 00:35:20.953 Run status group 0 (all jobs): 00:35:20.953 READ: bw=763KiB/s (781kB/s), 763KiB/s-763KiB/s (781kB/s-781kB/s), io=7632KiB (7815kB), run=10007-10007msec 00:35:20.953 16:34:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:20.953 16:34:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 00:35:20.954 real 0m11.091s 00:35:20.954 user 0m10.151s 00:35:20.954 sys 0m1.185s 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 ************************************ 00:35:20.954 END TEST fio_dif_1_default 00:35:20.954 ************************************ 00:35:20.954 16:34:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:20.954 16:34:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:20.954 16:34:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 ************************************ 00:35:20.954 START TEST fio_dif_1_multi_subsystems 00:35:20.954 ************************************ 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 bdev_null0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 [2024-07-15 16:34:02.324553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 bdev_null1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
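Collapsed from the interleaved xtrace above, create_subsystem() in target/dif.sh issues the same four RPCs for every index; a minimal sketch of one pass, assuming rpc_cmd is the harness wrapper around scripts/rpc.py (all four commands and their arguments appear verbatim in this trace):

    N=1
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, T10 DIF type 1
    rpc_cmd bdev_null_create bdev_null$N 64 512 --md-size 16 --dif-type 1
    # NVMe-oF subsystem with a matching serial number, open to any host
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$N --serial-number 53313233-$N --allow-any-host
    # attach the bdev as a namespace and expose it on TCP 10.0.0.2:4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$N bdev_null$N
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$N -t tcp -a 10.0.0.2 -s 4420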
00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:20.954 { 00:35:20.954 "params": { 00:35:20.954 "name": "Nvme$subsystem", 00:35:20.954 "trtype": "$TEST_TRANSPORT", 00:35:20.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.954 "adrfam": "ipv4", 00:35:20.954 "trsvcid": "$NVMF_PORT", 00:35:20.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.954 "hdgst": ${hdgst:-false}, 00:35:20.954 "ddgst": ${ddgst:-false} 00:35:20.954 }, 00:35:20.954 "method": "bdev_nvme_attach_controller" 00:35:20.954 } 00:35:20.954 EOF 00:35:20.954 )") 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1337 -- # shift 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:20.954 { 00:35:20.954 "params": { 00:35:20.954 "name": "Nvme$subsystem", 00:35:20.954 "trtype": "$TEST_TRANSPORT", 00:35:20.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:20.954 "adrfam": "ipv4", 00:35:20.954 "trsvcid": "$NVMF_PORT", 00:35:20.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:20.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:20.954 "hdgst": ${hdgst:-false}, 00:35:20.954 "ddgst": ${ddgst:-false} 00:35:20.954 }, 00:35:20.954 "method": "bdev_nvme_attach_controller" 00:35:20.954 } 00:35:20.954 EOF 00:35:20.954 )") 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:20.954 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
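Each heredoc fragment above is appended to the config array by gen_nvmf_target_json(); the jq/IFS/printf trio traced next joins those fragments with commas and validates the result before it reaches the plugin. A condensed sketch of that plumbing under the same variable names (the exact pipeline and redirections in nvmf/common.sh differ slightly):

    # join the per-subsystem JSON fragments and fail fast on malformed output
    (IFS=,; printf '%s\n' "${config[*]}") | jq .
    # hand the JSON to fio on fd 62, with the SPDK bdev engine preloaded
    LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The pretty-printed two-controller document that follows is exactly that joined output: one bdev_nvme_attach_controller stanza per subsystem, differing only in the cnode/host index.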
00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:20.955 "params": { 00:35:20.955 "name": "Nvme0", 00:35:20.955 "trtype": "tcp", 00:35:20.955 "traddr": "10.0.0.2", 00:35:20.955 "adrfam": "ipv4", 00:35:20.955 "trsvcid": "4420", 00:35:20.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.955 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.955 "hdgst": false, 00:35:20.955 "ddgst": false 00:35:20.955 }, 00:35:20.955 "method": "bdev_nvme_attach_controller" 00:35:20.955 },{ 00:35:20.955 "params": { 00:35:20.955 "name": "Nvme1", 00:35:20.955 "trtype": "tcp", 00:35:20.955 "traddr": "10.0.0.2", 00:35:20.955 "adrfam": "ipv4", 00:35:20.955 "trsvcid": "4420", 00:35:20.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:20.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:20.955 "hdgst": false, 00:35:20.955 "ddgst": false 00:35:20.955 }, 00:35:20.955 "method": "bdev_nvme_attach_controller" 00:35:20.955 }' 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:20.955 16:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.955 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:20.955 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:20.955 fio-3.35 00:35:20.955 Starting 2 threads 00:35:20.955 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.920 00:35:30.920 filename0: (groupid=0, jobs=1): err= 0: pid=491011: Mon Jul 15 16:34:13 2024 00:35:30.920 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10022msec) 00:35:30.920 slat (nsec): min=6852, max=71719, avg=10210.68, stdev=3880.74 00:35:30.920 clat (usec): min=40863, max=43329, avg=41898.69, stdev=308.96 00:35:30.920 lat (usec): min=40872, max=43343, avg=41908.90, stdev=309.29 00:35:30.920 clat percentiles (usec): 00:35:30.920 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:35:30.920 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:30.920 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:30.920 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:35:30.920 | 99.99th=[43254] 
00:35:30.920 bw ( KiB/s): min= 352, max= 384, per=33.62%, avg=380.80, stdev= 9.85, samples=20 00:35:30.920 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:30.920 lat (msec) : 50=100.00% 00:35:30.920 cpu : usr=94.91%, sys=4.77%, ctx=20, majf=0, minf=171 00:35:30.920 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.920 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.920 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:30.920 filename1: (groupid=0, jobs=1): err= 0: pid=491012: Mon Jul 15 16:34:13 2024 00:35:30.920 read: IOPS=187, BW=749KiB/s (767kB/s)(7520KiB/10035msec) 00:35:30.920 slat (nsec): min=7802, max=71951, avg=10041.11, stdev=3560.76 00:35:30.920 clat (usec): min=557, max=42884, avg=21318.86, stdev=20527.39 00:35:30.920 lat (usec): min=565, max=42897, avg=21328.90, stdev=20527.52 00:35:30.920 clat percentiles (usec): 00:35:30.920 | 1.00th=[ 611], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 685], 00:35:30.920 | 30.00th=[ 709], 40.00th=[ 766], 50.00th=[40633], 60.00th=[41157], 00:35:30.920 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:30.920 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:30.920 | 99.99th=[42730] 00:35:30.920 bw ( KiB/s): min= 672, max= 768, per=66.35%, avg=750.40, stdev=31.96, samples=20 00:35:30.920 iops : min= 168, max= 192, avg=187.60, stdev= 7.99, samples=20 00:35:30.920 lat (usec) : 750=37.98%, 1000=11.33% 00:35:30.920 lat (msec) : 2=0.48%, 50=50.21% 00:35:30.920 cpu : usr=94.60%, sys=5.04%, ctx=15, majf=0, minf=129 00:35:30.920 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.920 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.920 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:30.920 00:35:30.920 Run status group 0 (all jobs): 00:35:30.920 READ: bw=1130KiB/s (1158kB/s), 382KiB/s-749KiB/s (391kB/s-767kB/s), io=11.1MiB (11.6MB), run=10022-10035msec 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.920 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.921 00:35:30.921 real 0m11.468s 00:35:30.921 user 0m20.466s 00:35:30.921 sys 0m1.255s 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 ************************************ 00:35:30.921 END TEST fio_dif_1_multi_subsystems 00:35:30.921 ************************************ 00:35:30.921 16:34:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:30.921 16:34:13 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:30.921 16:34:13 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 ************************************ 00:35:30.921 START TEST fio_dif_rand_params 00:35:30.921 ************************************ 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:30.921 16:34:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 bdev_null0 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:30.921 [2024-07-15 16:34:13.835589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:30.921 { 00:35:30.921 "params": { 00:35:30.921 "name": "Nvme$subsystem", 00:35:30.921 "trtype": "$TEST_TRANSPORT", 00:35:30.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.921 "adrfam": "ipv4", 00:35:30.921 "trsvcid": "$NVMF_PORT", 00:35:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.921 "hdgst": ${hdgst:-false}, 00:35:30.921 "ddgst": ${ddgst:-false} 00:35:30.921 }, 00:35:30.921 "method": "bdev_nvme_attach_controller" 00:35:30.921 } 00:35:30.921 EOF 00:35:30.921 )") 00:35:30.921 
16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
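The sanitizer loop traced around fio_plugin() probes the plugin with ldd so that, on an ASAN build, the sanitizer runtime would be preloaded ahead of the bdev engine; here both greps come back empty, so asan_lib stays unset and only the plugin lands in LD_PRELOAD. The logic, condensed with the same names autotest_common.sh uses:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        # third ldd column is the resolved library path, if linked
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
    done
    LD_PRELOAD="$LD_PRELOAD $plugin"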
00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:30.921 "params": { 00:35:30.921 "name": "Nvme0", 00:35:30.921 "trtype": "tcp", 00:35:30.921 "traddr": "10.0.0.2", 00:35:30.921 "adrfam": "ipv4", 00:35:30.921 "trsvcid": "4420", 00:35:30.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:30.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:30.921 "hdgst": false, 00:35:30.921 "ddgst": false 00:35:30.921 }, 00:35:30.921 "method": "bdev_nvme_attach_controller" 00:35:30.921 }' 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:30.921 16:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:31.178 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:31.178 ... 
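The filename0 header just printed matches the knobs set at target/dif.sh@103 (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5). gen_fio_conf's output travels over fd 61 and is not echoed in the trace, so the jobfile below is an approximation reconstructed from that header; the [filename0] job name is real, while the filename= bdev name is assumed to follow SPDK's NvmeXn1 convention:

    cat <<-'FIO' > dif.job    # approximation of what gen_fio_conf emits on /dev/fd/61
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    [filename0]
    filename=Nvme0n1          # assumed bdev name (controller Nvme0, namespace 1)
    FIO

The per-job numbers that follow stay consistent with this block size: 214 IOPS x 128 KiB comes to the reported 26.8 MiB/s.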
00:35:31.178 fio-3.35 00:35:31.178 Starting 3 threads 00:35:31.178 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.733 00:35:37.733 filename0: (groupid=0, jobs=1): err= 0: pid=492414: Mon Jul 15 16:34:19 2024 00:35:37.733 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5005msec) 00:35:37.733 slat (nsec): min=5107, max=44273, avg=13800.16, stdev=3676.35 00:35:37.733 clat (usec): min=3913, max=88319, avg=13975.34, stdev=11663.66 00:35:37.733 lat (usec): min=3925, max=88334, avg=13989.14, stdev=11663.66 00:35:37.733 clat percentiles (usec): 00:35:37.733 | 1.00th=[ 5735], 5.00th=[ 7242], 10.00th=[ 7898], 20.00th=[ 8717], 00:35:37.733 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11207], 60.00th=[11469], 00:35:37.733 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13566], 95.00th=[50070], 00:35:37.733 | 99.00th=[53740], 99.50th=[55313], 99.90th=[57410], 99.95th=[88605], 00:35:37.733 | 99.99th=[88605] 00:35:37.733 bw ( KiB/s): min=19456, max=36096, per=32.83%, avg=27392.00, stdev=6426.68, samples=10 00:35:37.734 iops : min= 152, max= 282, avg=214.00, stdev=50.21, samples=10 00:35:37.734 lat (msec) : 4=0.09%, 10=31.69%, 20=59.65%, 50=2.98%, 100=5.59% 00:35:37.734 cpu : usr=88.45%, sys=10.93%, ctx=16, majf=0, minf=56 00:35:37.734 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.734 issued rwts: total=1073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.734 filename0: (groupid=0, jobs=1): err= 0: pid=492415: Mon Jul 15 16:34:19 2024 00:35:37.734 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(136MiB/5004msec) 00:35:37.734 slat (usec): min=4, max=106, avg=13.67, stdev= 4.31 00:35:37.734 clat (usec): min=4640, max=92167, avg=13739.21, stdev=10001.85 00:35:37.734 lat (usec): min=4653, max=92179, avg=13752.89, stdev=10001.88 00:35:37.734 clat percentiles (usec): 00:35:37.734 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 6521], 20.00th=[ 8455], 00:35:37.734 | 30.00th=[ 9110], 40.00th=[10683], 50.00th=[11994], 60.00th=[13042], 00:35:37.734 | 70.00th=[14484], 80.00th=[15664], 90.00th=[16712], 95.00th=[47449], 00:35:37.734 | 99.00th=[53216], 99.50th=[55837], 99.90th=[91751], 99.95th=[91751], 00:35:37.734 | 99.99th=[91751] 00:35:37.734 bw ( KiB/s): min=23808, max=34816, per=33.39%, avg=27858.20, stdev=3225.17, samples=10 00:35:37.734 iops : min= 186, max= 272, avg=217.60, stdev=25.21, samples=10 00:35:37.734 lat (msec) : 10=36.85%, 20=57.56%, 50=2.66%, 100=2.93% 00:35:37.734 cpu : usr=88.51%, sys=10.99%, ctx=10, majf=0, minf=129 00:35:37.734 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.734 issued rwts: total=1091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.734 filename0: (groupid=0, jobs=1): err= 0: pid=492416: Mon Jul 15 16:34:19 2024 00:35:37.734 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(137MiB/5004msec) 00:35:37.734 slat (nsec): min=4899, max=36108, avg=13513.60, stdev=3350.63 00:35:37.734 clat (usec): min=4667, max=89809, avg=13652.73, stdev=10991.89 00:35:37.734 lat (usec): min=4679, max=89821, avg=13666.24, stdev=10991.77 00:35:37.734 clat percentiles (usec): 
00:35:37.734 | 1.00th=[ 5080], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 8586], 00:35:37.734 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[11600], 60.00th=[12125], 00:35:37.734 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14484], 95.00th=[49021], 00:35:37.734 | 99.00th=[53740], 99.50th=[54789], 99.90th=[87557], 99.95th=[89654], 00:35:37.734 | 99.99th=[89654] 00:35:37.734 bw ( KiB/s): min=20224, max=35072, per=33.60%, avg=28032.00, stdev=5438.95, samples=10 00:35:37.734 iops : min= 158, max= 274, avg=219.00, stdev=42.49, samples=10 00:35:37.734 lat (msec) : 10=32.79%, 20=60.02%, 50=3.19%, 100=4.01% 00:35:37.734 cpu : usr=89.39%, sys=9.99%, ctx=22, majf=0, minf=91 00:35:37.734 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.734 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:37.734 00:35:37.734 Run status group 0 (all jobs): 00:35:37.734 READ: bw=81.5MiB/s (85.4MB/s), 26.8MiB/s-27.4MiB/s (28.1MB/s-28.8MB/s), io=408MiB (428MB), run=5004-5005msec 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
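The destroy_subsystems pass traced above undoes setup in reverse dependency order, deleting each subsystem before its backing null bdev; condensed from the target/dif.sh@36-39 lines as they appear throughout this run:

    destroy_subsystem() {
        local sub_id=$1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$sub_id
        rpc_cmd bdev_null_delete bdev_null$sub_id
    }

With the NULL_DIF=3 case closed out, the trace then rebuilds the subsystems with --dif-type 2 for the bs=4k, numjobs=8, iodepth=16 pass that follows.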
00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 bdev_null0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 [2024-07-15 16:34:19.918674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 bdev_null1 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 bdev_null2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 
-- # gen_fio_conf 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.734 { 00:35:37.734 "params": { 00:35:37.734 "name": "Nvme$subsystem", 00:35:37.734 "trtype": "$TEST_TRANSPORT", 00:35:37.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.734 "adrfam": "ipv4", 00:35:37.734 "trsvcid": "$NVMF_PORT", 00:35:37.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.734 "hdgst": ${hdgst:-false}, 00:35:37.734 "ddgst": ${ddgst:-false} 00:35:37.734 }, 00:35:37.734 "method": "bdev_nvme_attach_controller" 00:35:37.734 } 00:35:37.734 EOF 00:35:37.734 )") 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.734 { 00:35:37.734 "params": { 00:35:37.734 "name": "Nvme$subsystem", 00:35:37.734 "trtype": "$TEST_TRANSPORT", 00:35:37.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.734 "adrfam": "ipv4", 00:35:37.734 "trsvcid": "$NVMF_PORT", 00:35:37.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.734 "hdgst": ${hdgst:-false}, 00:35:37.734 "ddgst": ${ddgst:-false} 00:35:37.734 }, 00:35:37.734 "method": "bdev_nvme_attach_controller" 00:35:37.734 } 00:35:37.734 EOF 00:35:37.734 )") 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file <= files )) 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:37.734 { 00:35:37.734 "params": { 00:35:37.734 "name": "Nvme$subsystem", 00:35:37.734 "trtype": "$TEST_TRANSPORT", 00:35:37.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.734 "adrfam": "ipv4", 00:35:37.734 "trsvcid": "$NVMF_PORT", 00:35:37.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.734 "hdgst": ${hdgst:-false}, 00:35:37.734 "ddgst": ${ddgst:-false} 00:35:37.734 }, 00:35:37.734 "method": "bdev_nvme_attach_controller" 00:35:37.734 } 00:35:37.734 EOF 00:35:37.734 )") 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:37.734 16:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:37.734 16:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:37.734 16:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:37.734 "params": { 00:35:37.734 "name": "Nvme0", 00:35:37.734 "trtype": "tcp", 00:35:37.734 "traddr": "10.0.0.2", 00:35:37.734 "adrfam": "ipv4", 00:35:37.734 "trsvcid": "4420", 00:35:37.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:37.734 "hdgst": false, 00:35:37.734 "ddgst": false 00:35:37.734 }, 00:35:37.734 "method": "bdev_nvme_attach_controller" 00:35:37.734 },{ 00:35:37.734 "params": { 00:35:37.734 "name": "Nvme1", 00:35:37.734 "trtype": "tcp", 00:35:37.734 "traddr": "10.0.0.2", 00:35:37.734 "adrfam": "ipv4", 00:35:37.734 "trsvcid": "4420", 00:35:37.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.734 "hdgst": false, 00:35:37.734 "ddgst": false 00:35:37.734 }, 00:35:37.734 "method": "bdev_nvme_attach_controller" 00:35:37.735 },{ 00:35:37.735 "params": { 00:35:37.735 "name": "Nvme2", 00:35:37.735 "trtype": "tcp", 00:35:37.735 "traddr": "10.0.0.2", 00:35:37.735 "adrfam": "ipv4", 00:35:37.735 "trsvcid": "4420", 00:35:37.735 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:37.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:37.735 "hdgst": false, 00:35:37.735 "ddgst": false 00:35:37.735 }, 00:35:37.735 "method": "bdev_nvme_attach_controller" 00:35:37.735 }' 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:37.735 16:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.735 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:37.735 ... 00:35:37.735 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:37.735 ... 00:35:37.735 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:37.735 ... 00:35:37.735 fio-3.35 00:35:37.735 Starting 24 threads 00:35:37.735 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.928 00:35:49.928 filename0: (groupid=0, jobs=1): err= 0: pid=493272: Mon Jul 15 16:34:31 2024 00:35:49.928 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10012msec) 00:35:49.928 slat (usec): min=6, max=117, avg=30.66, stdev=25.61 00:35:49.928 clat (usec): min=1544, max=44660, avg=32894.83, stdev=3952.98 00:35:49.928 lat (usec): min=1562, max=44681, avg=32925.49, stdev=3953.36 00:35:49.928 clat percentiles (usec): 00:35:49.928 | 1.00th=[ 5866], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:35:49.928 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.928 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[35914], 00:35:49.928 | 99.00th=[36963], 99.50th=[39060], 99.90th=[44827], 99.95th=[44827], 00:35:49.928 | 99.99th=[44827] 00:35:49.928 bw ( KiB/s): min= 1792, max= 2304, per=4.23%, avg=1926.40, stdev=97.17, samples=20 00:35:49.928 iops : min= 448, max= 576, avg=481.60, stdev=24.29, samples=20 00:35:49.928 lat (msec) : 2=0.66%, 4=0.33%, 10=0.33%, 20=0.62%, 50=98.05% 00:35:49.928 cpu : usr=96.30%, sys=2.28%, ctx=123, majf=0, minf=9 00:35:49.928 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:49.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.928 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.928 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.928 filename0: (groupid=0, jobs=1): err= 0: pid=493273: Mon Jul 15 16:34:31 2024 00:35:49.928 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:35:49.928 slat (usec): min=8, max=120, avg=39.85, stdev=26.43 00:35:49.928 clat (usec): min=18211, max=64809, avg=33385.72, stdev=1601.79 00:35:49.928 lat (usec): min=18248, max=64837, avg=33425.57, stdev=1597.00 00:35:49.928 clat percentiles (usec): 00:35:49.928 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:49.928 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.928 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:49.928 | 99.00th=[40633], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:35:49.928 | 99.99th=[64750] 00:35:49.928 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1894.40, stdev=78.80, samples=20 00:35:49.929 iops : min= 416, max= 512, avg=473.60, stdev=19.70, samples=20 00:35:49.929 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 
00:35:49.929 cpu : usr=96.46%, sys=2.19%, ctx=214, majf=0, minf=9 00:35:49.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename0: (groupid=0, jobs=1): err= 0: pid=493274: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.5MiB/10004msec) 00:35:49.929 slat (usec): min=8, max=120, avg=60.84, stdev=24.60 00:35:49.929 clat (usec): min=23057, max=65561, avg=33214.66, stdev=2505.60 00:35:49.929 lat (usec): min=23065, max=65585, avg=33275.50, stdev=2501.10 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[26870], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:35:49.929 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:35:49.929 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.929 | 99.00th=[41681], 99.50th=[44827], 99.90th=[65274], 99.95th=[65274], 00:35:49.929 | 99.99th=[65799] 00:35:49.929 bw ( KiB/s): min= 1536, max= 1968, per=4.16%, avg=1895.58, stdev=92.74, samples=19 00:35:49.929 iops : min= 384, max= 492, avg=473.89, stdev=23.18, samples=19 00:35:49.929 lat (msec) : 50=99.62%, 100=0.38% 00:35:49.929 cpu : usr=93.26%, sys=3.67%, ctx=386, majf=0, minf=9 00:35:49.929 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename0: (groupid=0, jobs=1): err= 0: pid=493275: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.5MiB/10010msec) 00:35:49.929 slat (usec): min=18, max=121, avg=77.22, stdev=12.52 00:35:49.929 clat (usec): min=9930, max=65424, avg=33030.64, stdev=2540.48 00:35:49.929 lat (usec): min=9954, max=65452, avg=33107.85, stdev=2537.43 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:35:49.929 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:49.929 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.929 | 99.00th=[39584], 99.50th=[41681], 99.90th=[65274], 99.95th=[65274], 00:35:49.929 | 99.99th=[65274] 00:35:49.929 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1893.05, stdev=91.30, samples=19 00:35:49.929 iops : min= 384, max= 480, avg=473.26, stdev=22.83, samples=19 00:35:49.929 lat (msec) : 10=0.06%, 20=0.19%, 50=99.41%, 100=0.34% 00:35:49.929 cpu : usr=97.25%, sys=1.76%, ctx=202, majf=0, minf=9 00:35:49.929 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename0: (groupid=0, jobs=1): err= 0: pid=493276: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=473, BW=1894KiB/s 
(1940kB/s)(18.5MiB/10001msec) 00:35:49.929 slat (usec): min=13, max=118, avg=46.06, stdev=19.08 00:35:49.929 clat (usec): min=26430, max=68927, avg=33369.93, stdev=2080.59 00:35:49.929 lat (usec): min=26449, max=68956, avg=33415.99, stdev=2078.03 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:49.929 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:49.929 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.929 | 99.00th=[38536], 99.50th=[44303], 99.90th=[61604], 99.95th=[61604], 00:35:49.929 | 99.99th=[68682] 00:35:49.929 bw ( KiB/s): min= 1536, max= 2048, per=4.16%, avg=1893.05, stdev=100.78, samples=19 00:35:49.929 iops : min= 384, max= 512, avg=473.26, stdev=25.19, samples=19 00:35:49.929 lat (msec) : 50=99.66%, 100=0.34% 00:35:49.929 cpu : usr=92.16%, sys=4.20%, ctx=1077, majf=0, minf=9 00:35:49.929 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename0: (groupid=0, jobs=1): err= 0: pid=493277: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:35:49.929 slat (nsec): min=8004, max=94584, avg=32296.24, stdev=12858.68 00:35:49.929 clat (usec): min=12085, max=44551, avg=33294.37, stdev=1885.84 00:35:49.929 lat (usec): min=12093, max=44584, avg=33326.67, stdev=1886.25 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[30016], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:49.929 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.929 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[35914], 00:35:49.929 | 99.00th=[36963], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:35:49.929 | 99.99th=[44303] 00:35:49.929 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1906.53, stdev=40.36, samples=19 00:35:49.929 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:35:49.929 lat (msec) : 20=0.67%, 50=99.33% 00:35:49.929 cpu : usr=92.28%, sys=4.36%, ctx=365, majf=0, minf=9 00:35:49.929 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename0: (groupid=0, jobs=1): err= 0: pid=493278: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10045msec) 00:35:49.929 slat (nsec): min=7968, max=82075, avg=21605.93, stdev=14308.48 00:35:49.929 clat (usec): min=9783, max=92793, avg=33045.38, stdev=4558.27 00:35:49.929 lat (usec): min=9792, max=92828, avg=33066.98, stdev=4560.08 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[19268], 5.00th=[23725], 10.00th=[30540], 20.00th=[32900], 00:35:49.929 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.929 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:35:49.929 | 99.00th=[45876], 99.50th=[49021], 99.90th=[79168], 
99.95th=[79168], 00:35:49.929 | 99.99th=[92799] 00:35:49.929 bw ( KiB/s): min= 1507, max= 2160, per=4.20%, avg=1913.42, stdev=117.64, samples=19 00:35:49.929 iops : min= 376, max= 540, avg=478.32, stdev=29.55, samples=19 00:35:49.929 lat (msec) : 10=0.12%, 20=1.69%, 50=97.73%, 100=0.45% 00:35:49.929 cpu : usr=97.89%, sys=1.65%, ctx=35, majf=0, minf=9 00:35:49.929 IO depths : 1=0.1%, 2=0.1%, 4=1.1%, 8=81.0%, 16=17.8%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=89.4%, 8=9.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename0: (groupid=0, jobs=1): err= 0: pid=493279: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10006msec) 00:35:49.929 slat (nsec): min=8273, max=87927, avg=22946.68, stdev=15807.51 00:35:49.929 clat (usec): min=9297, max=88301, avg=33474.79, stdev=3429.88 00:35:49.929 lat (usec): min=9306, max=88342, avg=33497.74, stdev=3431.44 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[26870], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:49.929 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.929 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:49.929 | 99.00th=[36963], 99.50th=[44303], 99.90th=[79168], 99.95th=[79168], 00:35:49.929 | 99.99th=[88605] 00:35:49.929 bw ( KiB/s): min= 1539, max= 2048, per=4.14%, avg=1886.47, stdev=102.56, samples=19 00:35:49.929 iops : min= 384, max= 512, avg=471.58, stdev=25.78, samples=19 00:35:49.929 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:35:49.929 cpu : usr=98.01%, sys=1.57%, ctx=31, majf=0, minf=9 00:35:49.929 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename1: (groupid=0, jobs=1): err= 0: pid=493280: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10009msec) 00:35:49.929 slat (nsec): min=7472, max=72969, avg=30090.91, stdev=10846.44 00:35:49.929 clat (usec): min=9495, max=44571, avg=33229.22, stdev=2439.93 00:35:49.929 lat (usec): min=9511, max=44595, avg=33259.31, stdev=2439.35 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[16909], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:49.929 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.929 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[36439], 00:35:49.929 | 99.00th=[36963], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:35:49.929 | 99.99th=[44827] 00:35:49.929 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1907.20, stdev=39.40, samples=20 00:35:49.929 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:35:49.929 lat (msec) : 10=0.29%, 20=0.71%, 50=99.00% 00:35:49.929 cpu : usr=96.17%, sys=2.49%, ctx=46, majf=0, minf=9 00:35:49.929 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:49.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.929 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.929 filename1: (groupid=0, jobs=1): err= 0: pid=493281: Mon Jul 15 16:34:31 2024 00:35:49.929 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10007msec) 00:35:49.929 slat (usec): min=8, max=104, avg=30.53, stdev=18.43 00:35:49.929 clat (usec): min=10016, max=79952, avg=33401.82, stdev=2550.31 00:35:49.929 lat (usec): min=10028, max=79990, avg=33432.35, stdev=2551.01 00:35:49.929 clat percentiles (usec): 00:35:49.929 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:35:49.929 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.929 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:49.929 | 99.00th=[40633], 99.50th=[42206], 99.90th=[61604], 99.95th=[61604], 00:35:49.929 | 99.99th=[80217] 00:35:49.929 bw ( KiB/s): min= 1536, max= 2048, per=4.16%, avg=1893.05, stdev=100.78, samples=19 00:35:49.929 iops : min= 384, max= 512, avg=473.26, stdev=25.19, samples=19 00:35:49.930 lat (msec) : 20=0.38%, 50=99.28%, 100=0.34% 00:35:49.930 cpu : usr=98.38%, sys=1.20%, ctx=22, majf=0, minf=9 00:35:49.930 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.930 filename1: (groupid=0, jobs=1): err= 0: pid=493282: Mon Jul 15 16:34:31 2024 00:35:49.930 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10004msec) 00:35:49.930 slat (nsec): min=7448, max=83862, avg=34896.93, stdev=11993.50 00:35:49.930 clat (usec): min=26396, max=65088, avg=33465.24, stdev=2179.09 00:35:49.930 lat (usec): min=26425, max=65110, avg=33500.14, stdev=2178.29 00:35:49.930 clat percentiles (usec): 00:35:49.930 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:49.930 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:49.930 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.930 | 99.00th=[38536], 99.50th=[44303], 99.90th=[65274], 99.95th=[65274], 00:35:49.930 | 99.99th=[65274] 00:35:49.930 bw ( KiB/s): min= 1536, max= 1923, per=4.15%, avg=1887.11, stdev=94.20, samples=19 00:35:49.930 iops : min= 384, max= 480, avg=471.58, stdev=23.47, samples=19 00:35:49.930 lat (msec) : 50=99.66%, 100=0.34% 00:35:49.930 cpu : usr=95.92%, sys=2.51%, ctx=117, majf=0, minf=9 00:35:49.930 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.930 filename1: (groupid=0, jobs=1): err= 0: pid=493283: Mon Jul 15 16:34:31 2024 00:35:49.930 read: IOPS=476, BW=1904KiB/s (1950kB/s)(18.7MiB/10031msec) 00:35:49.930 slat (usec): min=7, max=117, avg=49.27, stdev=20.88 00:35:49.930 clat (usec): min=12638, max=44532, avg=33136.24, stdev=1963.91 00:35:49.930 lat (usec): min=12649, max=44552, avg=33185.51, stdev=1962.43 00:35:49.930 clat percentiles (usec): 
00:35:49.930 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:49.930 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:49.930 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.930 | 99.00th=[36963], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:35:49.930 | 99.99th=[44303] 00:35:49.930 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1907.20, stdev=39.40, samples=20 00:35:49.930 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:35:49.930 lat (msec) : 20=0.67%, 50=99.33% 00:35:49.930 cpu : usr=97.92%, sys=1.62%, ctx=22, majf=0, minf=9 00:35:49.930 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 issued rwts: total=4776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.930 filename1: (groupid=0, jobs=1): err= 0: pid=493284: Mon Jul 15 16:34:31 2024 00:35:49.930 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:35:49.930 slat (usec): min=8, max=140, avg=64.61, stdev=26.79 00:35:49.930 clat (usec): min=13125, max=53354, avg=33006.62, stdev=2052.64 00:35:49.930 lat (usec): min=13139, max=53386, avg=33071.24, stdev=2049.82 00:35:49.930 clat percentiles (usec): 00:35:49.930 | 1.00th=[28705], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:35:49.930 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:35:49.930 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.930 | 99.00th=[36963], 99.50th=[41157], 99.90th=[42730], 99.95th=[50594], 00:35:49.930 | 99.99th=[53216] 00:35:49.930 bw ( KiB/s): min= 1792, max= 2048, per=4.19%, avg=1906.53, stdev=58.73, samples=19 00:35:49.930 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:35:49.930 lat (msec) : 20=0.76%, 50=99.16%, 100=0.08% 00:35:49.930 cpu : usr=94.91%, sys=2.96%, ctx=210, majf=0, minf=9 00:35:49.930 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.930 filename1: (groupid=0, jobs=1): err= 0: pid=493285: Mon Jul 15 16:34:31 2024 00:35:49.930 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10006msec) 00:35:49.930 slat (usec): min=7, max=148, avg=42.22, stdev=16.99 00:35:49.930 clat (usec): min=10652, max=63565, avg=33297.07, stdev=2242.56 00:35:49.930 lat (usec): min=10660, max=63585, avg=33339.30, stdev=2242.95 00:35:49.930 clat percentiles (usec): 00:35:49.930 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:49.930 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:49.930 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.930 | 99.00th=[38536], 99.50th=[44303], 99.90th=[56361], 99.95th=[56361], 00:35:49.930 | 99.99th=[63701] 00:35:49.930 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1893.21, stdev=67.96, samples=19 00:35:49.930 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:35:49.930 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:35:49.930 cpu : 
usr=95.68%, sys=2.58%, ctx=137, majf=0, minf=9 00:35:49.930 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.930 filename1: (groupid=0, jobs=1): err= 0: pid=493286: Mon Jul 15 16:34:31 2024 00:35:49.930 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10015msec) 00:35:49.930 slat (usec): min=9, max=204, avg=75.66, stdev=14.41 00:35:49.930 clat (usec): min=21824, max=46625, avg=33044.92, stdev=1451.39 00:35:49.930 lat (usec): min=21872, max=46643, avg=33120.58, stdev=1446.76 00:35:49.930 clat percentiles (usec): 00:35:49.930 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:35:49.930 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:49.930 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.930 | 99.00th=[38011], 99.50th=[43254], 99.90th=[45351], 99.95th=[46400], 00:35:49.930 | 99.99th=[46400] 00:35:49.930 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1894.40, stdev=66.96, samples=20 00:35:49.930 iops : min= 416, max= 480, avg=473.60, stdev=16.74, samples=20 00:35:49.930 lat (msec) : 50=100.00% 00:35:49.930 cpu : usr=93.24%, sys=3.57%, ctx=316, majf=0, minf=9 00:35:49.930 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.930 filename1: (groupid=0, jobs=1): err= 0: pid=493287: Mon Jul 15 16:34:31 2024 00:35:49.930 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:35:49.930 slat (usec): min=8, max=116, avg=37.98, stdev=22.85 00:35:49.930 clat (usec): min=24591, max=47102, avg=33386.89, stdev=1454.93 00:35:49.930 lat (usec): min=24605, max=47121, avg=33424.87, stdev=1451.48 00:35:49.930 clat percentiles (usec): 00:35:49.930 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:35:49.930 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:35:49.930 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:49.930 | 99.00th=[40633], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:35:49.930 | 99.99th=[46924] 00:35:49.930 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1894.40, stdev=78.80, samples=20 00:35:49.930 iops : min= 416, max= 512, avg=473.60, stdev=19.70, samples=20 00:35:49.930 lat (msec) : 50=100.00% 00:35:49.930 cpu : usr=96.44%, sys=2.23%, ctx=93, majf=0, minf=9 00:35:49.930 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.930 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.930 filename2: (groupid=0, jobs=1): err= 0: pid=493288: Mon Jul 15 16:34:31 2024 00:35:49.930 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:35:49.930 slat (nsec): 
min=8158, max=70351, avg=23067.13, stdev=10285.89 00:35:49.930 clat (usec): min=10270, max=59863, avg=33466.86, stdev=2327.81 00:35:49.931 lat (usec): min=10286, max=59897, avg=33489.93, stdev=2329.32 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:35:49.931 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.931 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:49.931 | 99.00th=[40633], 99.50th=[42206], 99.90th=[59507], 99.95th=[60031], 00:35:49.931 | 99.99th=[60031] 00:35:49.931 bw ( KiB/s): min= 1539, max= 2048, per=4.16%, avg=1893.21, stdev=100.19, samples=19 00:35:49.931 iops : min= 384, max= 512, avg=473.26, stdev=25.19, samples=19 00:35:49.931 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:35:49.931 cpu : usr=97.81%, sys=1.79%, ctx=29, majf=0, minf=9 00:35:49.931 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.931 filename2: (groupid=0, jobs=1): err= 0: pid=493289: Mon Jul 15 16:34:31 2024 00:35:49.931 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10008msec) 00:35:49.931 slat (nsec): min=7885, max=86021, avg=31375.27, stdev=10708.36 00:35:49.931 clat (usec): min=9454, max=44635, avg=33216.57, stdev=2420.71 00:35:49.931 lat (usec): min=9469, max=44656, avg=33247.95, stdev=2419.90 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[16909], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:49.931 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:35:49.931 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[36439], 00:35:49.931 | 99.00th=[36963], 99.50th=[39060], 99.90th=[44303], 99.95th=[44827], 00:35:49.931 | 99.99th=[44827] 00:35:49.931 bw ( KiB/s): min= 1792, max= 1928, per=4.19%, avg=1907.20, stdev=39.48, samples=20 00:35:49.931 iops : min= 448, max= 482, avg=476.80, stdev= 9.87, samples=20 00:35:49.931 lat (msec) : 10=0.15%, 20=0.86%, 50=99.00% 00:35:49.931 cpu : usr=97.75%, sys=1.59%, ctx=37, majf=0, minf=9 00:35:49.931 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.931 filename2: (groupid=0, jobs=1): err= 0: pid=493290: Mon Jul 15 16:34:31 2024 00:35:49.931 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10014msec) 00:35:49.931 slat (usec): min=6, max=101, avg=39.74, stdev=12.99 00:35:49.931 clat (usec): min=26389, max=46592, avg=33366.35, stdev=1323.56 00:35:49.931 lat (usec): min=26429, max=46609, avg=33406.08, stdev=1323.93 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:49.931 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:49.931 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[35914], 00:35:49.931 | 99.00th=[38536], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:35:49.931 | 
99.99th=[46400] 00:35:49.931 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1894.40, stdev=66.96, samples=20 00:35:49.931 iops : min= 416, max= 480, avg=473.60, stdev=16.74, samples=20 00:35:49.931 lat (msec) : 50=100.00% 00:35:49.931 cpu : usr=97.72%, sys=1.58%, ctx=34, majf=0, minf=9 00:35:49.931 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.931 filename2: (groupid=0, jobs=1): err= 0: pid=493291: Mon Jul 15 16:34:31 2024 00:35:49.931 read: IOPS=475, BW=1904KiB/s (1949kB/s)(18.6MiB/10031msec) 00:35:49.931 slat (usec): min=7, max=108, avg=44.28, stdev=17.20 00:35:49.931 clat (usec): min=12256, max=44523, avg=33170.70, stdev=1952.04 00:35:49.931 lat (usec): min=12300, max=44558, avg=33214.98, stdev=1952.38 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[28705], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:49.931 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:49.931 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[35914], 00:35:49.931 | 99.00th=[36963], 99.50th=[39060], 99.90th=[44303], 99.95th=[44303], 00:35:49.931 | 99.99th=[44303] 00:35:49.931 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1907.20, stdev=39.40, samples=20 00:35:49.931 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:35:49.931 lat (msec) : 20=0.67%, 50=99.33% 00:35:49.931 cpu : usr=97.72%, sys=1.86%, ctx=37, majf=0, minf=9 00:35:49.931 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 issued rwts: total=4774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.931 filename2: (groupid=0, jobs=1): err= 0: pid=493292: Mon Jul 15 16:34:31 2024 00:35:49.931 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:35:49.931 slat (nsec): min=13768, max=72733, avg=35187.07, stdev=8805.28 00:35:49.931 clat (usec): min=26430, max=63316, avg=33468.14, stdev=2086.23 00:35:49.931 lat (usec): min=26455, max=63351, avg=33503.33, stdev=2086.02 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:49.931 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:35:49.931 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[35914], 00:35:49.931 | 99.00th=[38536], 99.50th=[44303], 99.90th=[63177], 99.95th=[63177], 00:35:49.931 | 99.99th=[63177] 00:35:49.931 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1893.05, stdev=91.30, samples=19 00:35:49.931 iops : min= 384, max= 480, avg=473.26, stdev=22.83, samples=19 00:35:49.931 lat (msec) : 50=99.66%, 100=0.34% 00:35:49.931 cpu : usr=97.94%, sys=1.53%, ctx=31, majf=0, minf=9 00:35:49.931 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 issued rwts: total=4736,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:35:49.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.931 filename2: (groupid=0, jobs=1): err= 0: pid=493293: Mon Jul 15 16:34:31 2024 00:35:49.931 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10006msec) 00:35:49.931 slat (usec): min=8, max=108, avg=31.44, stdev=24.16 00:35:49.931 clat (usec): min=6489, max=79252, avg=33273.08, stdev=3732.83 00:35:49.931 lat (usec): min=6497, max=79285, avg=33304.53, stdev=3733.22 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[22414], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:35:49.931 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.931 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:49.931 | 99.00th=[43254], 99.50th=[45876], 99.90th=[79168], 99.95th=[79168], 00:35:49.931 | 99.99th=[79168] 00:35:49.931 bw ( KiB/s): min= 1539, max= 2048, per=4.15%, avg=1889.00, stdev=104.01, samples=19 00:35:49.931 iops : min= 384, max= 512, avg=472.21, stdev=26.14, samples=19 00:35:49.931 lat (msec) : 10=0.34%, 20=0.50%, 50=98.83%, 100=0.34% 00:35:49.931 cpu : usr=97.25%, sys=1.83%, ctx=59, majf=0, minf=9 00:35:49.931 IO depths : 1=5.4%, 2=11.3%, 4=23.6%, 8=52.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 issued rwts: total=4770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.931 filename2: (groupid=0, jobs=1): err= 0: pid=493294: Mon Jul 15 16:34:31 2024 00:35:49.931 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10004msec) 00:35:49.931 slat (usec): min=8, max=111, avg=28.09, stdev=23.63 00:35:49.931 clat (usec): min=26678, max=65433, avg=33545.91, stdev=2170.54 00:35:49.931 lat (usec): min=26712, max=65460, avg=33574.00, stdev=2169.63 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:35:49.931 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:35:49.931 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:35:49.931 | 99.00th=[36963], 99.50th=[44303], 99.90th=[65274], 99.95th=[65274], 00:35:49.931 | 99.99th=[65274] 00:35:49.931 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1893.05, stdev=91.30, samples=19 00:35:49.931 iops : min= 384, max= 480, avg=473.26, stdev=22.83, samples=19 00:35:49.931 lat (msec) : 50=99.66%, 100=0.34% 00:35:49.931 cpu : usr=97.92%, sys=1.57%, ctx=38, majf=0, minf=9 00:35:49.931 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.931 filename2: (groupid=0, jobs=1): err= 0: pid=493295: Mon Jul 15 16:34:31 2024 00:35:49.931 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:35:49.931 slat (nsec): min=5611, max=97182, avg=37253.74, stdev=11192.86 00:35:49.931 clat (usec): min=26452, max=72158, avg=33488.26, stdev=2530.66 00:35:49.931 lat (usec): min=26496, max=72174, avg=33525.51, stdev=2529.85 00:35:49.931 clat percentiles (usec): 00:35:49.931 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 
00:35:49.931 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:35:49.931 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[35914], 00:35:49.931 | 99.00th=[38536], 99.50th=[44303], 99.90th=[71828], 99.95th=[71828], 00:35:49.931 | 99.99th=[71828] 00:35:49.931 bw ( KiB/s): min= 1536, max= 1920, per=4.15%, avg=1888.00, stdev=91.69, samples=20 00:35:49.931 iops : min= 384, max= 480, avg=472.00, stdev=22.92, samples=20 00:35:49.931 lat (msec) : 50=99.66%, 100=0.34% 00:35:49.931 cpu : usr=97.72%, sys=1.58%, ctx=108, majf=0, minf=9 00:35:49.931 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:49.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.931 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.932 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.932 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:49.932 00:35:49.932 Run status group 0 (all jobs): 00:35:49.932 READ: bw=44.4MiB/s (46.6MB/s), 1892KiB/s-1930KiB/s (1938kB/s-1977kB/s), io=446MiB (468MB), run=10001-10045msec 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 bdev_null0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 [2024-07-15 16:34:31.641725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 bdev_null1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:35:49.932 { 00:35:49.932 "params": { 00:35:49.932 "name": "Nvme$subsystem", 00:35:49.932 "trtype": "$TEST_TRANSPORT", 00:35:49.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.932 "adrfam": "ipv4", 00:35:49.932 "trsvcid": "$NVMF_PORT", 00:35:49.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.932 "hdgst": ${hdgst:-false}, 00:35:49.932 "ddgst": ${ddgst:-false} 00:35:49.932 }, 00:35:49.932 "method": "bdev_nvme_attach_controller" 00:35:49.932 } 00:35:49.932 EOF 00:35:49.932 )") 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:49.932 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:49.933 { 00:35:49.933 "params": { 00:35:49.933 "name": "Nvme$subsystem", 00:35:49.933 "trtype": "$TEST_TRANSPORT", 00:35:49.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.933 "adrfam": "ipv4", 00:35:49.933 "trsvcid": "$NVMF_PORT", 00:35:49.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.933 "hdgst": ${hdgst:-false}, 00:35:49.933 "ddgst": ${ddgst:-false} 00:35:49.933 }, 00:35:49.933 "method": "bdev_nvme_attach_controller" 00:35:49.933 } 00:35:49.933 EOF 00:35:49.933 )") 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:49.933 "params": { 00:35:49.933 "name": "Nvme0", 00:35:49.933 "trtype": "tcp", 00:35:49.933 "traddr": "10.0.0.2", 00:35:49.933 "adrfam": "ipv4", 00:35:49.933 "trsvcid": "4420", 00:35:49.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.933 "hdgst": false, 00:35:49.933 "ddgst": false 00:35:49.933 }, 00:35:49.933 "method": "bdev_nvme_attach_controller" 00:35:49.933 },{ 00:35:49.933 "params": { 00:35:49.933 "name": "Nvme1", 00:35:49.933 "trtype": "tcp", 00:35:49.933 "traddr": "10.0.0.2", 00:35:49.933 "adrfam": "ipv4", 00:35:49.933 "trsvcid": "4420", 00:35:49.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:49.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:49.933 "hdgst": false, 00:35:49.933 "ddgst": false 00:35:49.933 }, 00:35:49.933 "method": "bdev_nvme_attach_controller" 00:35:49.933 }' 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:49.933 16:34:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.933 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:49.933 ... 00:35:49.933 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:49.933 ... 
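The trace above shows how this second topology is wired before fio starts: two null bdevs are created, each is exported through its own NVMe/TCP subsystem listening on 10.0.0.2:4420, and the generated bdev_nvme_attach_controller JSON plus the fio job file are handed to fio over anonymous /dev/fd descriptors. A minimal standalone sketch of the same setup, assuming a running nvmf_tgt and SPDK's stock scripts/rpc.py (the job-file name and the use of a regular file in place of /dev/fd/62 are illustrative, not what the harness does):

# Create the null backing devices: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
# Export each bdev through its own NVMe-oF subsystem with a TCP listener
for i in 0 1; do
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" --serial-number "53313233-$i" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
# Drive I/O through the SPDK fio bdev plugin; the JSON config attaches Nvme0/Nvme1
LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme_attach.json ./dif.fio

Streaming the config over /dev/fd/62, as the harness does, simply avoids touching disk; the JSON content is the pair of bdev_nvme_attach_controller calls printed in the trace above.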
00:35:49.933 fio-3.35 00:35:49.933 Starting 4 threads 00:35:49.933 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.194 00:35:55.194 filename0: (groupid=0, jobs=1): err= 0: pid=494670: Mon Jul 15 16:34:37 2024 00:35:55.194 read: IOPS=1935, BW=15.1MiB/s (15.9MB/s)(75.6MiB/5002msec) 00:35:55.194 slat (nsec): min=5078, max=84105, avg=19904.85, stdev=11571.81 00:35:55.194 clat (usec): min=735, max=6952, avg=4068.91, stdev=431.18 00:35:55.194 lat (usec): min=792, max=6970, avg=4088.82, stdev=431.77 00:35:55.194 clat percentiles (usec): 00:35:55.194 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3818], 00:35:55.194 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:35:55.194 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:35:55.194 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 6456], 99.95th=[ 6783], 00:35:55.194 | 99.99th=[ 6980] 00:35:55.194 bw ( KiB/s): min=14592, max=16208, per=25.47%, avg=15476.80, stdev=433.09, samples=10 00:35:55.194 iops : min= 1824, max= 2026, avg=1934.60, stdev=54.14, samples=10 00:35:55.194 lat (usec) : 750=0.01% 00:35:55.194 lat (msec) : 2=0.03%, 4=39.54%, 10=60.42% 00:35:55.194 cpu : usr=91.52%, sys=5.88%, ctx=340, majf=0, minf=68 00:35:55.194 IO depths : 1=0.4%, 2=9.6%, 4=63.3%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 issued rwts: total=9679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.194 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.194 filename0: (groupid=0, jobs=1): err= 0: pid=494671: Mon Jul 15 16:34:37 2024 00:35:55.194 read: IOPS=1890, BW=14.8MiB/s (15.5MB/s)(74.4MiB/5041msec) 00:35:55.194 slat (nsec): min=5510, max=78578, avg=22690.51, stdev=12832.64 00:35:55.194 clat (usec): min=728, max=41562, avg=4116.26, stdev=609.77 00:35:55.194 lat (usec): min=747, max=41575, avg=4138.95, stdev=609.44 00:35:55.194 clat percentiles (usec): 00:35:55.194 | 1.00th=[ 2900], 5.00th=[ 3589], 10.00th=[ 3720], 20.00th=[ 3818], 00:35:55.194 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:35:55.194 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4752], 00:35:55.194 | 99.00th=[ 5997], 99.50th=[ 6390], 99.90th=[ 6980], 99.95th=[ 7242], 00:35:55.194 | 99.99th=[41681] 00:35:55.194 bw ( KiB/s): min=14592, max=16384, per=25.09%, avg=15244.80, stdev=501.95, samples=10 00:35:55.194 iops : min= 1824, max= 2048, avg=1905.60, stdev=62.74, samples=10 00:35:55.194 lat (usec) : 750=0.01% 00:35:55.194 lat (msec) : 2=0.23%, 4=39.03%, 10=60.72%, 50=0.01% 00:35:55.194 cpu : usr=94.03%, sys=5.02%, ctx=69, majf=0, minf=33 00:35:55.194 IO depths : 1=1.0%, 2=16.8%, 4=57.1%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 issued rwts: total=9529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.194 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.194 filename1: (groupid=0, jobs=1): err= 0: pid=494672: Mon Jul 15 16:34:37 2024 00:35:55.194 read: IOPS=1891, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5001msec) 00:35:55.194 slat (nsec): min=3924, max=83063, avg=22530.02, stdev=12987.11 00:35:55.194 clat (usec): min=838, max=7857, avg=4143.93, stdev=493.12 00:35:55.194 lat (usec): min=854, max=7873, avg=4166.46, stdev=492.53 
00:35:55.194 clat percentiles (usec): 00:35:55.194 | 1.00th=[ 3064], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3851], 00:35:55.194 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:35:55.194 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4817], 00:35:55.194 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7046], 99.95th=[ 7373], 00:35:55.194 | 99.99th=[ 7832] 00:35:55.194 bw ( KiB/s): min=14592, max=16416, per=24.88%, avg=15116.44, stdev=558.96, samples=9 00:35:55.194 iops : min= 1824, max= 2052, avg=1889.56, stdev=69.87, samples=9 00:35:55.194 lat (usec) : 1000=0.04% 00:35:55.194 lat (msec) : 2=0.41%, 4=37.00%, 10=62.54% 00:35:55.194 cpu : usr=94.90%, sys=4.56%, ctx=9, majf=0, minf=33 00:35:55.194 IO depths : 1=0.3%, 2=16.6%, 4=57.5%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 issued rwts: total=9461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.194 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.194 filename1: (groupid=0, jobs=1): err= 0: pid=494673: Mon Jul 15 16:34:37 2024 00:35:55.194 read: IOPS=1922, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5003msec) 00:35:55.194 slat (nsec): min=4516, max=84194, avg=20852.35, stdev=11251.73 00:35:55.194 clat (usec): min=773, max=7684, avg=4089.11, stdev=475.84 00:35:55.194 lat (usec): min=787, max=7707, avg=4109.96, stdev=476.24 00:35:55.194 clat percentiles (usec): 00:35:55.194 | 1.00th=[ 2868], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3818], 00:35:55.194 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:35:55.194 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:35:55.194 | 99.00th=[ 5932], 99.50th=[ 6259], 99.90th=[ 7111], 99.95th=[ 7177], 00:35:55.194 | 99.99th=[ 7701] 00:35:55.194 bw ( KiB/s): min=14592, max=16256, per=25.31%, avg=15377.60, stdev=452.26, samples=10 00:35:55.194 iops : min= 1824, max= 2032, avg=1922.20, stdev=56.53, samples=10 00:35:55.194 lat (usec) : 1000=0.02% 00:35:55.194 lat (msec) : 2=0.10%, 4=39.69%, 10=60.19% 00:35:55.194 cpu : usr=94.40%, sys=4.78%, ctx=39, majf=0, minf=64 00:35:55.194 IO depths : 1=0.5%, 2=14.1%, 4=59.5%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.194 issued rwts: total=9618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.194 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:55.194 00:35:55.194 Run status group 0 (all jobs): 00:35:55.194 READ: bw=59.3MiB/s (62.2MB/s), 14.8MiB/s-15.1MiB/s (15.5MB/s-15.9MB/s), io=299MiB (314MB), run=5001-5041msec 00:35:55.194 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:55.194 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:55.194 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.194 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 00:35:55.195 real 0m24.047s 00:35:55.195 user 4m28.052s 00:35:55.195 sys 0m8.972s 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 ************************************ 00:35:55.195 END TEST fio_dif_rand_params 00:35:55.195 ************************************ 00:35:55.195 16:34:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:55.195 16:34:37 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:55.195 16:34:37 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 ************************************ 00:35:55.195 START TEST fio_dif_digest 00:35:55.195 ************************************ 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:55.195 16:34:37 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 bdev_null0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.195 [2024-07-15 16:34:37.924367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:55.195 { 00:35:55.195 "params": { 00:35:55.195 "name": "Nvme$subsystem", 00:35:55.195 "trtype": "$TEST_TRANSPORT", 00:35:55.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.195 "adrfam": "ipv4", 00:35:55.195 "trsvcid": "$NVMF_PORT", 00:35:55.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.195 "hdgst": ${hdgst:-false}, 00:35:55.195 "ddgst": ${ddgst:-false} 00:35:55.195 }, 00:35:55.195 "method": 
"bdev_nvme_attach_controller" 00:35:55.195 } 00:35:55.195 EOF 00:35:55.195 )") 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:55.195 "params": { 00:35:55.195 "name": "Nvme0", 00:35:55.195 "trtype": "tcp", 00:35:55.195 "traddr": "10.0.0.2", 00:35:55.195 "adrfam": "ipv4", 00:35:55.195 "trsvcid": "4420", 00:35:55.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.195 "hdgst": true, 00:35:55.195 "ddgst": true 00:35:55.195 }, 00:35:55.195 "method": "bdev_nvme_attach_controller" 00:35:55.195 }' 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:55.195 16:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.453 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:55.453 ... 
00:35:55.453 fio-3.35 00:35:55.453 Starting 3 threads 00:35:55.453 EAL: No free 2048 kB hugepages reported on node 1 00:36:07.652 00:36:07.652 filename0: (groupid=0, jobs=1): err= 0: pid=495422: Mon Jul 15 16:34:48 2024 00:36:07.652 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10047msec) 00:36:07.652 slat (nsec): min=5169, max=38909, avg=14597.38, stdev=3728.81 00:36:07.652 clat (usec): min=11563, max=57099, avg=15327.89, stdev=3201.91 00:36:07.652 lat (usec): min=11577, max=57112, avg=15342.48, stdev=3201.99 00:36:07.652 clat percentiles (usec): 00:36:07.652 | 1.00th=[12649], 5.00th=[13435], 10.00th=[13698], 20.00th=[14222], 00:36:07.652 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:36:07.652 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:36:07.652 | 99.00th=[17957], 99.50th=[53740], 99.90th=[56361], 99.95th=[56886], 00:36:07.652 | 99.99th=[56886] 00:36:07.652 bw ( KiB/s): min=22528, max=26368, per=31.87%, avg=25075.20, stdev=970.30, samples=20 00:36:07.652 iops : min= 176, max= 206, avg=195.90, stdev= 7.58, samples=20 00:36:07.652 lat (msec) : 20=99.39%, 50=0.05%, 100=0.56% 00:36:07.652 cpu : usr=89.07%, sys=10.43%, ctx=20, majf=0, minf=141 00:36:07.652 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.652 issued rwts: total=1961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.652 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.652 filename0: (groupid=0, jobs=1): err= 0: pid=495423: Mon Jul 15 16:34:48 2024 00:36:07.652 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10007msec) 00:36:07.652 slat (nsec): min=4965, max=45396, avg=14585.10, stdev=3922.58 00:36:07.652 clat (usec): min=8233, max=19625, avg=14081.70, stdev=1208.69 00:36:07.652 lat (usec): min=8246, max=19641, avg=14096.29, stdev=1208.64 00:36:07.652 clat percentiles (usec): 00:36:07.652 | 1.00th=[ 9765], 5.00th=[12256], 10.00th=[12780], 20.00th=[13304], 00:36:07.652 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:36:07.652 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:36:07.652 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19530], 99.95th=[19530], 00:36:07.652 | 99.99th=[19530] 00:36:07.652 bw ( KiB/s): min=26368, max=29184, per=34.59%, avg=27212.80, stdev=649.23, samples=20 00:36:07.652 iops : min= 206, max= 228, avg=212.60, stdev= 5.07, samples=20 00:36:07.652 lat (msec) : 10=1.41%, 20=98.59% 00:36:07.652 cpu : usr=88.63%, sys=10.86%, ctx=28, majf=0, minf=109 00:36:07.652 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.652 issued rwts: total=2129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.652 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.652 filename0: (groupid=0, jobs=1): err= 0: pid=495424: Mon Jul 15 16:34:48 2024 00:36:07.652 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(261MiB/10048msec) 00:36:07.652 slat (nsec): min=4789, max=43550, avg=14306.55, stdev=3865.73 00:36:07.652 clat (usec): min=9290, max=51530, avg=14411.67, stdev=1587.79 00:36:07.652 lat (usec): min=9303, max=51544, avg=14425.98, stdev=1587.71 00:36:07.652 clat percentiles (usec): 00:36:07.652 | 1.00th=[10814], 
5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:36:07.652 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:36:07.652 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[16188], 00:36:07.652 | 99.00th=[17171], 99.50th=[17695], 99.90th=[23462], 99.95th=[47449], 00:36:07.652 | 99.99th=[51643] 00:36:07.652 bw ( KiB/s): min=25344, max=27904, per=33.89%, avg=26664.95, stdev=672.03, samples=20 00:36:07.652 iops : min= 198, max= 218, avg=208.30, stdev= 5.28, samples=20 00:36:07.652 lat (msec) : 10=0.48%, 20=99.33%, 50=0.14%, 100=0.05% 00:36:07.652 cpu : usr=88.33%, sys=11.18%, ctx=21, majf=0, minf=129 00:36:07.652 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.652 issued rwts: total=2086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.652 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:07.652 00:36:07.652 Run status group 0 (all jobs): 00:36:07.652 READ: bw=76.8MiB/s (80.6MB/s), 24.4MiB/s-26.6MiB/s (25.6MB/s-27.9MB/s), io=772MiB (810MB), run=10007-10048msec 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.652 00:36:07.652 real 0m11.017s 00:36:07.652 user 0m27.909s 00:36:07.652 sys 0m3.512s 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:07.652 16:34:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.652 ************************************ 00:36:07.652 END TEST fio_dif_digest 00:36:07.652 ************************************ 00:36:07.652 16:34:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:07.652 16:34:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:07.652 rmmod nvme_tcp 00:36:07.652 rmmod nvme_fabrics 00:36:07.652 rmmod 
nvme_keyring 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 489370 ']' 00:36:07.652 16:34:48 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 489370 00:36:07.652 16:34:48 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 489370 ']' 00:36:07.652 16:34:48 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 489370 00:36:07.652 16:34:48 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:07.652 16:34:48 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:07.652 16:34:48 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 489370 00:36:07.652 16:34:49 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:07.652 16:34:49 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:07.653 16:34:49 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 489370' 00:36:07.653 killing process with pid 489370 00:36:07.653 16:34:49 nvmf_dif -- common/autotest_common.sh@965 -- # kill 489370 00:36:07.653 16:34:49 nvmf_dif -- common/autotest_common.sh@970 -- # wait 489370 00:36:07.653 16:34:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:07.653 16:34:49 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:07.653 Waiting for block devices as requested 00:36:07.653 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:36:07.653 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:07.653 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:07.653 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:07.911 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:07.911 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:07.911 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.911 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:08.171 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:08.171 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:08.171 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:08.171 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:08.434 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:08.434 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:08.434 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:08.693 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:08.693 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:08.693 16:34:51 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:08.693 16:34:51 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:08.693 16:34:51 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:08.693 16:34:51 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:08.693 16:34:51 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.693 16:34:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:08.693 16:34:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.222 16:34:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:11.222 00:36:11.222 real 1m6.332s 00:36:11.222 user 6m22.326s 00:36:11.222 sys 0m22.431s 00:36:11.222 16:34:53 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:11.222 16:34:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:11.222 
************************************ 00:36:11.222 END TEST nvmf_dif 00:36:11.222 ************************************ 00:36:11.222 16:34:53 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:11.222 16:34:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:11.222 16:34:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:11.222 16:34:53 -- common/autotest_common.sh@10 -- # set +x 00:36:11.222 ************************************ 00:36:11.222 START TEST nvmf_abort_qd_sizes 00:36:11.222 ************************************ 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:11.222 * Looking for test storage... 00:36:11.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.222 16:34:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.223 16:34:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:11.223 16:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:13.122 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:13.122 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:13.122 Found net devices under 0000:84:00.0: cvl_0_0 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:13.122 Found net devices under 0000:84:00.1: cvl_0_1 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
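(With both E810 ports mapped to cvl_0_0 and cvl_0_1, nvmf_tcp_init — traced below — splits them across network namespaces so a single host can drive NVMe/TCP traffic over the physical link: the target side gets 10.0.0.2 inside the namespace, the initiator side keeps 10.0.0.1 in the root namespace. Condensed from the commands in the trace that follows:)

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP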
00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:13.122 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:13.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:36:13.123 00:36:13.123 --- 10.0.0.2 ping statistics --- 00:36:13.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.123 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:36:13.123 00:36:13.123 --- 10.0.0.1 ping statistics --- 00:36:13.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.123 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:13.123 16:34:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.059 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.059 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.059 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:14.059 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.059 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.318 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:14.318 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.318 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.318 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:15.250 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=500240 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 500240 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 500240 ']' 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:15.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:15.250 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.250 [2024-07-15 16:34:58.217160] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:15.250 [2024-07-15 16:34:58.217233] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.508 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.508 [2024-07-15 16:34:58.285265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:15.508 [2024-07-15 16:34:58.371480] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.508 [2024-07-15 16:34:58.371547] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.508 [2024-07-15 16:34:58.371570] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.508 [2024-07-15 16:34:58.371581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.508 [2024-07-15 16:34:58.371591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:15.508 [2024-07-15 16:34:58.371639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.508 [2024-07-15 16:34:58.371694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:15.508 [2024-07-15 16:34:58.371765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.508 [2024-07-15 16:34:58.371761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:15.768 16:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.768 ************************************ 00:36:15.768 START TEST spdk_target_abort 00:36:15.768 ************************************ 00:36:15.768 16:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:15.768 16:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:15.768 16:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:36:15.768 16:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.768 16:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.051 spdk_targetn1 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.051 [2024-07-15 16:35:01.373805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.051 [2024-07-15 16:35:01.406121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:19.051 16:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:19.051 EAL: No free 2048 kB hugepages reported on node 1 
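(The rabort helper above assembles the transport-ID string one field at a time, then loops over the queue depths 4, 24 and 64, driving SPDK's abort example against the listener. A standalone sketch reconstructed from the trace; the gloss on -M is an assumption based on SPDK's perf-style option set:)

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # -q: queue depth under test, -w rw -M 50: mixed read/write workload,
        # -o 4096: 4 KiB I/O size. The tool submits aborts against in-flight
        # I/O and reports submitted/failed-to-submit counts, as seen below.
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done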
00:36:22.345 Initializing NVMe Controllers 00:36:22.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:22.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:22.345 Initialization complete. Launching workers. 00:36:22.345 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11524, failed: 0 00:36:22.345 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1344, failed to submit 10180 00:36:22.345 success 767, unsuccess 577, failed 0 00:36:22.345 16:35:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:22.345 16:35:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:22.345 EAL: No free 2048 kB hugepages reported on node 1 00:36:25.631 Initializing NVMe Controllers 00:36:25.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:25.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:25.631 Initialization complete. Launching workers. 00:36:25.631 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8606, failed: 0 00:36:25.631 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1266, failed to submit 7340 00:36:25.631 success 297, unsuccess 969, failed 0 00:36:25.631 16:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:25.631 16:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.631 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.159 Initializing NVMe Controllers 00:36:28.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:28.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:28.159 Initialization complete. Launching workers. 
00:36:28.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31591, failed: 0 00:36:28.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2692, failed to submit 28899 00:36:28.159 success 543, unsuccess 2149, failed 0 00:36:28.159 16:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:28.159 16:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.159 16:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:28.159 16:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.159 16:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:28.159 16:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.159 16:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 500240 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 500240 ']' 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 500240 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 500240 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 500240' 00:36:29.535 killing process with pid 500240 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 500240 00:36:29.535 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 500240 00:36:29.794 00:36:29.794 real 0m14.174s 00:36:29.794 user 0m53.532s 00:36:29.794 sys 0m2.777s 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.794 ************************************ 00:36:29.794 END TEST spdk_target_abort 00:36:29.794 ************************************ 00:36:29.794 16:35:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:29.794 16:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:29.794 16:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:29.794 16:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:29.794 ************************************ 00:36:29.794 START TEST kernel_target_abort 00:36:29.794 
************************************ 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:29.794 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:30.052 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:30.052 16:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:30.984 Waiting for block devices as requested 00:36:30.984 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:36:30.984 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:31.243 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:31.243 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:31.243 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:31.503 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:31.503 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:31.503 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:31.503 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:31.773 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:31.773 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:31.773 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:31.773 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:32.031 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:32.031 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:32.031 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:32.031 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:32.289 No valid GPT data, bailing 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:32.289 16:35:15 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:36:32.289 00:36:32.289 Discovery Log Number of Records 2, Generation counter 2 00:36:32.289 =====Discovery Log Entry 0====== 00:36:32.289 trtype: tcp 00:36:32.289 adrfam: ipv4 00:36:32.289 subtype: current discovery subsystem 00:36:32.289 treq: not specified, sq flow control disable supported 00:36:32.289 portid: 1 00:36:32.289 trsvcid: 4420 00:36:32.289 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:32.289 traddr: 10.0.0.1 00:36:32.289 eflags: none 00:36:32.289 sectype: none 00:36:32.289 =====Discovery Log Entry 1====== 00:36:32.289 trtype: tcp 00:36:32.289 adrfam: ipv4 00:36:32.289 subtype: nvme subsystem 00:36:32.289 treq: not specified, sq flow control disable supported 00:36:32.289 portid: 1 00:36:32.289 trsvcid: 4420 00:36:32.289 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:32.289 traddr: 10.0.0.1 00:36:32.289 eflags: none 00:36:32.289 sectype: none 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.289 16:35:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:32.289 16:35:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.289 EAL: No free 2048 kB hugepages reported on node 1 00:36:35.576 Initializing NVMe Controllers 00:36:35.576 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:35.576 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:35.576 Initialization complete. Launching workers. 00:36:35.576 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40476, failed: 0 00:36:35.576 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40476, failed to submit 0 00:36:35.576 success 0, unsuccess 40476, failed 0 00:36:35.576 16:35:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.576 16:35:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.576 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.858 Initializing NVMe Controllers 00:36:38.858 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:38.858 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:38.858 Initialization complete. Launching workers. 
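For orientation: the three abort runs in this block are all driven by the same small loop in the rabort helper traced above. It assembles the transport ID string field by field, then sweeps the queue depths declared in qds=(4 24 64). A minimal sketch of that loop, with the abort example's flags copied from the trace and the workspace path shortened:

    # sketch of the rabort sweep traced above; path abbreviated
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # -q = queue depth, -w rw -M 50 = 50/50 read/write mix, -o = 4096-byte I/O
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

At -q 4 every abort went out (40476 submitted, 0 failed to submit); the deeper queues that follow are the regime where abort submissions themselves start to fail, which is what the differing "failed to submit" counts below record.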
00:36:38.858 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75780, failed: 0 00:36:38.858 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19102, failed to submit 56678 00:36:38.858 success 0, unsuccess 19102, failed 0 00:36:38.858 16:35:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.858 16:35:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.858 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.177 Initializing NVMe Controllers 00:36:42.177 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:42.177 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:42.177 Initialization complete. Launching workers. 00:36:42.177 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78246, failed: 0 00:36:42.177 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19530, failed to submit 58716 00:36:42.177 success 0, unsuccess 19530, failed 0 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:42.177 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:42.178 16:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:43.112 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:43.112 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:43.112 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:43.112 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:43.112 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:43.112 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:43.112 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:43.112 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:43.112 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:43.112 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:43.112 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:43.112 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:43.112 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:43.112 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:43.112 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:43.112 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:44.051 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:36:44.051 00:36:44.051 real 0m14.119s 00:36:44.051 user 0m5.956s 00:36:44.051 sys 0m3.232s 00:36:44.051 16:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:44.051 16:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.051 ************************************ 00:36:44.051 END TEST kernel_target_abort 00:36:44.051 ************************************ 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:44.051 rmmod nvme_tcp 00:36:44.051 rmmod nvme_fabrics 00:36:44.051 rmmod nvme_keyring 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 500240 ']' 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 500240 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 500240 ']' 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 500240 00:36:44.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (500240) - No such process 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 500240 is not found' 00:36:44.051 Process with pid 500240 is not found 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:44.051 16:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:45.426 Waiting for block devices as requested 00:36:45.426 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:36:45.426 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:45.426 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:45.683 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:45.683 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:45.683 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:45.683 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:45.940 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:45.940 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:45.940 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:45.940 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:46.197 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:46.197 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:46.197 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:46.197 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:46.455 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:36:46.455 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:46.455 16:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:46.455 16:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:46.455 16:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:46.455 16:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:46.455 16:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.455 16:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.455 16:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.987 16:35:31 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:48.987 00:36:48.987 real 0m37.676s 00:36:48.987 user 1m1.604s 00:36:48.987 sys 0m9.408s 00:36:48.987 16:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:48.987 16:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:48.987 ************************************ 00:36:48.987 END TEST nvmf_abort_qd_sizes 00:36:48.987 ************************************ 00:36:48.987 16:35:31 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:48.987 16:35:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:48.987 16:35:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:48.987 16:35:31 -- common/autotest_common.sh@10 -- # set +x 00:36:48.987 ************************************ 00:36:48.987 START TEST keyring_file 00:36:48.987 ************************************ 00:36:48.987 16:35:31 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:48.987 * Looking for test storage... 
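Before following the keyring_file trace further, it is worth condensing what kernel_target_abort above actually did to stand up and tear down its target: everything goes through the kernel's nvmet configfs tree, with no daemon involved. A rough sketch of both halves, with the device, NQN, and address taken from the trace; the attribute names are the standard nvmet configfs ones, and the mapping of the two bare echoes to attr_model and attr_allow_any_host is an assumption based on their values:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet   # after this /sys/kernel/config/nvmet exists (the trace checks exactly that)
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the '@665' echo
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed target of the '@667' echo
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # port starts exporting the subsystem
    # teardown, in the order clean_kernel_target ran it
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet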
00:36:48.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.987 16:35:31 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.987 16:35:31 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.987 16:35:31 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.987 16:35:31 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.987 16:35:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.987 16:35:31 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.987 16:35:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:48.987 16:35:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tjGqXMWA5a 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:48.987 16:35:31 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tjGqXMWA5a 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tjGqXMWA5a 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tjGqXMWA5a 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vJbONtKdqD 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:48.987 16:35:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vJbONtKdqD 00:36:48.987 16:35:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vJbONtKdqD 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.vJbONtKdqD 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=506002 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:48.987 16:35:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 506002 00:36:48.987 16:35:31 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 506002 ']' 00:36:48.987 16:35:31 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.987 16:35:31 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:48.987 16:35:31 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.987 16:35:31 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:48.987 16:35:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:48.987 [2024-07-15 16:35:31.634099] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
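While spdk_tgt comes up, note what the two /tmp/tmp.* files prepared above actually are: prep_key in keyring/common.sh is just mktemp plus format_interchange_psk from nvmf/common.sh, which wraps the raw hex key into the NVMeTLSkey-1 interchange format through an inline python snippet. A rough sketch, written as if the helper prints the wrapped key to stdout (an assumption; only the calls shown in the trace are certain):

    # approximately what 'prep_key key0 00112233445566778899aabbccddeeff 0' did above
    key=00112233445566778899aabbccddeeff   # raw hex key material
    digest=0                               # 0 selects no PSK digest
    path=$(mktemp)                         # came back as /tmp/tmp.tjGqXMWA5a here
    format_interchange_psk "$key" "$digest" > "$path"   # stdout redirect is assumed
    chmod 0600 "$path"   # mandatory: the file-based keyring rejects looser modes

The 0600 mode is load-bearing: later in this trace the suite deliberately chmods the file to 0660, expects keyring_file_add_key to fail with "Invalid permissions for key file", then restores 0600 before re-adding it.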
00:36:48.987 [2024-07-15 16:35:31.634177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506002 ] 00:36:48.987 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.987 [2024-07-15 16:35:31.696209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.987 [2024-07-15 16:35:31.787950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.244 16:35:32 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:49.245 16:35:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:49.245 [2024-07-15 16:35:32.034087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:49.245 null0 00:36:49.245 [2024-07-15 16:35:32.066157] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:49.245 [2024-07-15 16:35:32.066678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:49.245 [2024-07-15 16:35:32.074176] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.245 16:35:32 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:49.245 [2024-07-15 16:35:32.082165] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:49.245 request: 00:36:49.245 { 00:36:49.245 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.245 "secure_channel": false, 00:36:49.245 "listen_address": { 00:36:49.245 "trtype": "tcp", 00:36:49.245 "traddr": "127.0.0.1", 00:36:49.245 "trsvcid": "4420" 00:36:49.245 }, 00:36:49.245 "method": "nvmf_subsystem_add_listener", 00:36:49.245 "req_id": 1 00:36:49.245 } 00:36:49.245 Got JSON-RPC error response 00:36:49.245 response: 00:36:49.245 { 00:36:49.245 "code": -32602, 00:36:49.245 "message": "Invalid parameters" 00:36:49.245 } 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:49.245 16:35:32 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:49.245 16:35:32 keyring_file -- keyring/file.sh@46 -- # bperfpid=506007 00:36:49.245 16:35:32 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:49.245 16:35:32 keyring_file -- keyring/file.sh@48 -- # waitforlisten 506007 /var/tmp/bperf.sock 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 506007 ']' 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:49.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:49.245 16:35:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:49.245 [2024-07-15 16:35:32.130363] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:49.245 [2024-07-15 16:35:32.130424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506007 ] 00:36:49.245 EAL: No free 2048 kB hugepages reported on node 1 00:36:49.245 [2024-07-15 16:35:32.191557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.503 [2024-07-15 16:35:32.283455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.503 16:35:32 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:49.503 16:35:32 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:49.503 16:35:32 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:49.503 16:35:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:49.760 16:35:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vJbONtKdqD 00:36:49.760 16:35:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vJbONtKdqD 00:36:50.018 16:35:32 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:50.018 16:35:32 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:50.018 16:35:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.018 16:35:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.018 16:35:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.275 16:35:33 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.tjGqXMWA5a == \/\t\m\p\/\t\m\p\.\t\j\G\q\X\M\W\A\5\a ]] 00:36:50.275 16:35:33 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:50.275 16:35:33 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:50.275 16:35:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.275 16:35:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.275 16:35:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:50.532 16:35:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vJbONtKdqD == \/\t\m\p\/\t\m\p\.\v\J\b\O\N\t\K\d\q\D ]] 00:36:50.532 16:35:33 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:50.532 16:35:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.532 16:35:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.532 16:35:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.532 16:35:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.532 16:35:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.789 16:35:33 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:50.789 16:35:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:50.789 16:35:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:50.789 16:35:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.789 16:35:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.789 16:35:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.789 16:35:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:51.047 16:35:33 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:51.047 16:35:33 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.047 16:35:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.304 [2024-07-15 16:35:34.115166] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:51.304 nvme0n1 00:36:51.304 16:35:34 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:51.304 16:35:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:51.304 16:35:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.304 16:35:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.304 16:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.304 16:35:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:51.561 16:35:34 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:51.561 16:35:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:51.561 16:35:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:51.561 16:35:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.561 16:35:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.561 
16:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 16:35:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:51.818 16:35:34 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:51.818 16:35:34 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:51.818 Running I/O for 1 seconds...
00:36:53.194
00:36:53.194 Latency(us)
00:36:53.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:53.194 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:36:53.194 nvme0n1 : 1.01 8024.70 31.35 0.00 0.00 15870.30 9417.77 31068.92
00:36:53.194 ===================================================================================================================
00:36:53.194 Total : 8024.70 31.35 0.00 0.00 15870.30 9417.77 31068.92
00:36:53.194 0
00:36:53.194 16:35:35 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:53.194 16:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:53.194 16:35:36 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:53.194 16:35:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.194 16:35:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.194 16:35:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.195 16:35:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.195 16:35:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.452 16:35:36 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:53.452 16:35:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:53.452 16:35:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.452 16:35:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.452 16:35:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.452 16:35:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.452 16:35:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.709 16:35:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:53.709 16:35:36 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.709 16:35:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:53.709 16:35:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.709 16:35:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:53.709 16:35:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:53.709 16:35:36 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:53.709 16:35:36
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:53.709 16:35:36 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.709 16:35:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:53.966 [2024-07-15 16:35:36.796129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:53.966 [2024-07-15 16:35:36.796965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x750730 (107): Transport endpoint is not connected 00:36:53.966 [2024-07-15 16:35:36.797958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x750730 (9): Bad file descriptor 00:36:53.966 [2024-07-15 16:35:36.798957] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:53.967 [2024-07-15 16:35:36.798975] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:53.967 [2024-07-15 16:35:36.798988] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:53.967 request: 00:36:53.967 { 00:36:53.967 "name": "nvme0", 00:36:53.967 "trtype": "tcp", 00:36:53.967 "traddr": "127.0.0.1", 00:36:53.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.967 "adrfam": "ipv4", 00:36:53.967 "trsvcid": "4420", 00:36:53.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.967 "psk": "key1", 00:36:53.967 "method": "bdev_nvme_attach_controller", 00:36:53.967 "req_id": 1 00:36:53.967 } 00:36:53.967 Got JSON-RPC error response 00:36:53.967 response: 00:36:53.967 { 00:36:53.967 "code": -5, 00:36:53.967 "message": "Input/output error" 00:36:53.967 } 00:36:53.967 16:35:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:53.967 16:35:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:53.967 16:35:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:53.967 16:35:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:53.967 16:35:36 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:53.967 16:35:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.967 16:35:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.967 16:35:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.967 16:35:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.967 16:35:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:54.223 16:35:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:54.223 16:35:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:54.223 16:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:54.223 16:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:54.223 16:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.223 16:35:37 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:36:54.223 16:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.481 16:35:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:54.481 16:35:37 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:54.481 16:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:54.737 16:35:37 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:54.737 16:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:54.994 16:35:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:54.994 16:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.994 16:35:37 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:55.251 16:35:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:55.251 16:35:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.tjGqXMWA5a 00:36:55.251 16:35:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:55.251 16:35:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:55.251 16:35:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:55.251 16:35:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:55.251 16:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.251 16:35:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:55.251 16:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:55.251 16:35:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:55.251 16:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:55.507 [2024-07-15 16:35:38.283911] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tjGqXMWA5a': 0100660 00:36:55.507 [2024-07-15 16:35:38.283948] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:55.507 request: 00:36:55.507 { 00:36:55.507 "name": "key0", 00:36:55.507 "path": "/tmp/tmp.tjGqXMWA5a", 00:36:55.507 "method": "keyring_file_add_key", 00:36:55.507 "req_id": 1 00:36:55.507 } 00:36:55.507 Got JSON-RPC error response 00:36:55.507 response: 00:36:55.507 { 00:36:55.507 "code": -1, 00:36:55.507 "message": "Operation not permitted" 00:36:55.507 } 00:36:55.507 16:35:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:55.507 16:35:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:55.507 16:35:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:55.507 16:35:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:55.507 16:35:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.tjGqXMWA5a 00:36:55.507 16:35:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:55.507 16:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tjGqXMWA5a 00:36:55.763 16:35:38 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.tjGqXMWA5a 00:36:55.763 16:35:38 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:55.763 16:35:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.763 16:35:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.763 16:35:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.763 16:35:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.763 16:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.019 16:35:38 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:56.019 16:35:38 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.019 16:35:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:56.019 16:35:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.019 16:35:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:56.019 16:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.019 16:35:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:56.019 16:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.019 16:35:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.019 16:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.277 [2024-07-15 16:35:39.005877] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tjGqXMWA5a': No such file or directory 00:36:56.277 [2024-07-15 16:35:39.005908] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:56.277 [2024-07-15 16:35:39.005934] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:56.277 [2024-07-15 16:35:39.005946] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:56.277 [2024-07-15 16:35:39.005958] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:56.277 request: 00:36:56.277 { 00:36:56.277 "name": "nvme0", 00:36:56.277 "trtype": "tcp", 00:36:56.277 "traddr": "127.0.0.1", 00:36:56.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.277 "adrfam": "ipv4", 00:36:56.277 "trsvcid": "4420", 00:36:56.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.277 "psk": "key0", 00:36:56.277 "method": "bdev_nvme_attach_controller", 
00:36:56.277 "req_id": 1 00:36:56.277 } 00:36:56.277 Got JSON-RPC error response 00:36:56.277 response: 00:36:56.277 { 00:36:56.277 "code": -19, 00:36:56.277 "message": "No such device" 00:36:56.277 } 00:36:56.277 16:35:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:56.277 16:35:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:56.277 16:35:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:56.277 16:35:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:56.277 16:35:39 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:56.277 16:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:56.534 16:35:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.F8LnaHqGNj 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:56.534 16:35:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:56.534 16:35:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:56.534 16:35:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:56.534 16:35:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:56.534 16:35:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:56.534 16:35:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.F8LnaHqGNj 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.F8LnaHqGNj 00:36:56.534 16:35:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.F8LnaHqGNj 00:36:56.534 16:35:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.F8LnaHqGNj 00:36:56.534 16:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.F8LnaHqGNj 00:36:56.790 16:35:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.790 16:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:57.047 nvme0n1 00:36:57.047 16:35:39 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:57.047 16:35:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:57.047 16:35:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:57.047 16:35:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.047 16:35:39 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.047 16:35:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:57.304 16:35:40 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:57.304 16:35:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:57.304 16:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:57.561 16:35:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:57.561 16:35:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:57.561 16:35:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.561 16:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.561 16:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:57.818 16:35:40 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:57.818 16:35:40 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:57.818 16:35:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:57.818 16:35:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:57.818 16:35:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:57.818 16:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.818 16:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:58.074 16:35:40 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:58.074 16:35:40 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:58.074 16:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:58.330 16:35:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:58.330 16:35:41 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:58.330 16:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.587 16:35:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:58.587 16:35:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.F8LnaHqGNj 00:36:58.587 16:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.F8LnaHqGNj 00:36:58.845 16:35:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vJbONtKdqD 00:36:58.845 16:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vJbONtKdqD 00:36:59.102 16:35:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.102 16:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.360 nvme0n1 00:36:59.360 16:35:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:59.360 16:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:59.617 16:35:42 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:59.617 "subsystems": [ 00:36:59.617 { 00:36:59.617 "subsystem": "keyring", 00:36:59.617 "config": [ 00:36:59.617 { 00:36:59.617 "method": "keyring_file_add_key", 00:36:59.617 "params": { 00:36:59.617 "name": "key0", 00:36:59.617 "path": "/tmp/tmp.F8LnaHqGNj" 00:36:59.617 } 00:36:59.617 }, 00:36:59.617 { 00:36:59.617 "method": "keyring_file_add_key", 00:36:59.617 "params": { 00:36:59.617 "name": "key1", 00:36:59.617 "path": "/tmp/tmp.vJbONtKdqD" 00:36:59.617 } 00:36:59.617 } 00:36:59.617 ] 00:36:59.617 }, 00:36:59.617 { 00:36:59.617 "subsystem": "iobuf", 00:36:59.617 "config": [ 00:36:59.617 { 00:36:59.617 "method": "iobuf_set_options", 00:36:59.617 "params": { 00:36:59.617 "small_pool_count": 8192, 00:36:59.617 "large_pool_count": 1024, 00:36:59.617 "small_bufsize": 8192, 00:36:59.617 "large_bufsize": 135168 00:36:59.617 } 00:36:59.617 } 00:36:59.617 ] 00:36:59.617 }, 00:36:59.617 { 00:36:59.617 "subsystem": "sock", 00:36:59.618 "config": [ 00:36:59.618 { 00:36:59.618 "method": "sock_set_default_impl", 00:36:59.618 "params": { 00:36:59.618 "impl_name": "posix" 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "sock_impl_set_options", 00:36:59.618 "params": { 00:36:59.618 "impl_name": "ssl", 00:36:59.618 "recv_buf_size": 4096, 00:36:59.618 "send_buf_size": 4096, 00:36:59.618 "enable_recv_pipe": true, 00:36:59.618 "enable_quickack": false, 00:36:59.618 "enable_placement_id": 0, 00:36:59.618 "enable_zerocopy_send_server": true, 00:36:59.618 "enable_zerocopy_send_client": false, 00:36:59.618 "zerocopy_threshold": 0, 00:36:59.618 "tls_version": 0, 00:36:59.618 "enable_ktls": false 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "sock_impl_set_options", 00:36:59.618 "params": { 00:36:59.618 "impl_name": "posix", 00:36:59.618 "recv_buf_size": 2097152, 00:36:59.618 "send_buf_size": 2097152, 00:36:59.618 "enable_recv_pipe": true, 00:36:59.618 "enable_quickack": false, 00:36:59.618 "enable_placement_id": 0, 00:36:59.618 "enable_zerocopy_send_server": true, 00:36:59.618 "enable_zerocopy_send_client": false, 00:36:59.618 "zerocopy_threshold": 0, 00:36:59.618 "tls_version": 0, 00:36:59.618 "enable_ktls": false 00:36:59.618 } 00:36:59.618 } 00:36:59.618 ] 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "subsystem": "vmd", 00:36:59.618 "config": [] 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "subsystem": "accel", 00:36:59.618 "config": [ 00:36:59.618 { 00:36:59.618 "method": "accel_set_options", 00:36:59.618 "params": { 00:36:59.618 "small_cache_size": 128, 00:36:59.618 "large_cache_size": 16, 00:36:59.618 "task_count": 2048, 00:36:59.618 "sequence_count": 2048, 00:36:59.618 "buf_count": 2048 00:36:59.618 } 00:36:59.618 } 00:36:59.618 ] 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "subsystem": "bdev", 00:36:59.618 "config": [ 00:36:59.618 { 00:36:59.618 "method": "bdev_set_options", 00:36:59.618 "params": { 00:36:59.618 "bdev_io_pool_size": 65535, 00:36:59.618 "bdev_io_cache_size": 256, 00:36:59.618 "bdev_auto_examine": true, 00:36:59.618 "iobuf_small_cache_size": 128, 
00:36:59.618 "iobuf_large_cache_size": 16 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "bdev_raid_set_options", 00:36:59.618 "params": { 00:36:59.618 "process_window_size_kb": 1024 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "bdev_iscsi_set_options", 00:36:59.618 "params": { 00:36:59.618 "timeout_sec": 30 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "bdev_nvme_set_options", 00:36:59.618 "params": { 00:36:59.618 "action_on_timeout": "none", 00:36:59.618 "timeout_us": 0, 00:36:59.618 "timeout_admin_us": 0, 00:36:59.618 "keep_alive_timeout_ms": 10000, 00:36:59.618 "arbitration_burst": 0, 00:36:59.618 "low_priority_weight": 0, 00:36:59.618 "medium_priority_weight": 0, 00:36:59.618 "high_priority_weight": 0, 00:36:59.618 "nvme_adminq_poll_period_us": 10000, 00:36:59.618 "nvme_ioq_poll_period_us": 0, 00:36:59.618 "io_queue_requests": 512, 00:36:59.618 "delay_cmd_submit": true, 00:36:59.618 "transport_retry_count": 4, 00:36:59.618 "bdev_retry_count": 3, 00:36:59.618 "transport_ack_timeout": 0, 00:36:59.618 "ctrlr_loss_timeout_sec": 0, 00:36:59.618 "reconnect_delay_sec": 0, 00:36:59.618 "fast_io_fail_timeout_sec": 0, 00:36:59.618 "disable_auto_failback": false, 00:36:59.618 "generate_uuids": false, 00:36:59.618 "transport_tos": 0, 00:36:59.618 "nvme_error_stat": false, 00:36:59.618 "rdma_srq_size": 0, 00:36:59.618 "io_path_stat": false, 00:36:59.618 "allow_accel_sequence": false, 00:36:59.618 "rdma_max_cq_size": 0, 00:36:59.618 "rdma_cm_event_timeout_ms": 0, 00:36:59.618 "dhchap_digests": [ 00:36:59.618 "sha256", 00:36:59.618 "sha384", 00:36:59.618 "sha512" 00:36:59.618 ], 00:36:59.618 "dhchap_dhgroups": [ 00:36:59.618 "null", 00:36:59.618 "ffdhe2048", 00:36:59.618 "ffdhe3072", 00:36:59.618 "ffdhe4096", 00:36:59.618 "ffdhe6144", 00:36:59.618 "ffdhe8192" 00:36:59.618 ] 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "bdev_nvme_attach_controller", 00:36:59.618 "params": { 00:36:59.618 "name": "nvme0", 00:36:59.618 "trtype": "TCP", 00:36:59.618 "adrfam": "IPv4", 00:36:59.618 "traddr": "127.0.0.1", 00:36:59.618 "trsvcid": "4420", 00:36:59.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.618 "prchk_reftag": false, 00:36:59.618 "prchk_guard": false, 00:36:59.618 "ctrlr_loss_timeout_sec": 0, 00:36:59.618 "reconnect_delay_sec": 0, 00:36:59.618 "fast_io_fail_timeout_sec": 0, 00:36:59.618 "psk": "key0", 00:36:59.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.618 "hdgst": false, 00:36:59.618 "ddgst": false 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "bdev_nvme_set_hotplug", 00:36:59.618 "params": { 00:36:59.618 "period_us": 100000, 00:36:59.618 "enable": false 00:36:59.618 } 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "method": "bdev_wait_for_examine" 00:36:59.618 } 00:36:59.618 ] 00:36:59.618 }, 00:36:59.618 { 00:36:59.618 "subsystem": "nbd", 00:36:59.618 "config": [] 00:36:59.618 } 00:36:59.618 ] 00:36:59.618 }' 00:36:59.618 16:35:42 keyring_file -- keyring/file.sh@114 -- # killprocess 506007 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 506007 ']' 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@950 -- # kill -0 506007 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 506007 00:36:59.618 16:35:42 keyring_file -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 506007' 00:36:59.618 killing process with pid 506007 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@965 -- # kill 506007 00:36:59.618 Received shutdown signal, test time was about 1.000000 seconds 00:36:59.618 00:36:59.618 Latency(us) 00:36:59.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.618 =================================================================================================================== 00:36:59.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:59.618 16:35:42 keyring_file -- common/autotest_common.sh@970 -- # wait 506007 00:36:59.876 16:35:42 keyring_file -- keyring/file.sh@117 -- # bperfpid=507443 00:36:59.876 16:35:42 keyring_file -- keyring/file.sh@119 -- # waitforlisten 507443 /var/tmp/bperf.sock 00:36:59.876 16:35:42 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 507443 ']' 00:36:59.876 16:35:42 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:59.876 16:35:42 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:59.876 16:35:42 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:59.876 16:35:42 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:59.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
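The restarted bdevperf above receives its whole JSON configuration over file descriptor 63 (-c /dev/fd/63) rather than from a file on disk: the config captured by save_config is piped straight back into the new process. A minimal sketch of that pattern, assuming bash process substitution ($config stands in for the JSON echoed below):

    # Capture the running target's configuration as JSON over the RPC socket.
    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
    # Relaunch bdevperf, feeding the JSON back over an anonymous fd;
    # <(...) expands to /dev/fd/NN, which bdevperf then reads via -c.
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")

This keeps the PSK paths and attached-controller parameters identical across the restart without ever writing the configuration to a temporary file.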
00:36:59.876 16:35:42 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:59.876 16:35:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:59.876 16:35:42 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:59.876 "subsystems": [ 00:36:59.876 { 00:36:59.876 "subsystem": "keyring", 00:36:59.876 "config": [ 00:36:59.876 { 00:36:59.876 "method": "keyring_file_add_key", 00:36:59.876 "params": { 00:36:59.876 "name": "key0", 00:36:59.876 "path": "/tmp/tmp.F8LnaHqGNj" 00:36:59.876 } 00:36:59.876 }, 00:36:59.876 { 00:36:59.876 "method": "keyring_file_add_key", 00:36:59.876 "params": { 00:36:59.876 "name": "key1", 00:36:59.876 "path": "/tmp/tmp.vJbONtKdqD" 00:36:59.876 } 00:36:59.876 } 00:36:59.876 ] 00:36:59.876 }, 00:36:59.876 { 00:36:59.876 "subsystem": "iobuf", 00:36:59.876 "config": [ 00:36:59.876 { 00:36:59.876 "method": "iobuf_set_options", 00:36:59.876 "params": { 00:36:59.876 "small_pool_count": 8192, 00:36:59.876 "large_pool_count": 1024, 00:36:59.877 "small_bufsize": 8192, 00:36:59.877 "large_bufsize": 135168 00:36:59.877 } 00:36:59.877 } 00:36:59.877 ] 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "subsystem": "sock", 00:36:59.877 "config": [ 00:36:59.877 { 00:36:59.877 "method": "sock_set_default_impl", 00:36:59.877 "params": { 00:36:59.877 "impl_name": "posix" 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "method": "sock_impl_set_options", 00:36:59.877 "params": { 00:36:59.877 "impl_name": "ssl", 00:36:59.877 "recv_buf_size": 4096, 00:36:59.877 "send_buf_size": 4096, 00:36:59.877 "enable_recv_pipe": true, 00:36:59.877 "enable_quickack": false, 00:36:59.877 "enable_placement_id": 0, 00:36:59.877 "enable_zerocopy_send_server": true, 00:36:59.877 "enable_zerocopy_send_client": false, 00:36:59.877 "zerocopy_threshold": 0, 00:36:59.877 "tls_version": 0, 00:36:59.877 "enable_ktls": false 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "method": "sock_impl_set_options", 00:36:59.877 "params": { 00:36:59.877 "impl_name": "posix", 00:36:59.877 "recv_buf_size": 2097152, 00:36:59.877 "send_buf_size": 2097152, 00:36:59.877 "enable_recv_pipe": true, 00:36:59.877 "enable_quickack": false, 00:36:59.877 "enable_placement_id": 0, 00:36:59.877 "enable_zerocopy_send_server": true, 00:36:59.877 "enable_zerocopy_send_client": false, 00:36:59.877 "zerocopy_threshold": 0, 00:36:59.877 "tls_version": 0, 00:36:59.877 "enable_ktls": false 00:36:59.877 } 00:36:59.877 } 00:36:59.877 ] 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "subsystem": "vmd", 00:36:59.877 "config": [] 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "subsystem": "accel", 00:36:59.877 "config": [ 00:36:59.877 { 00:36:59.877 "method": "accel_set_options", 00:36:59.877 "params": { 00:36:59.877 "small_cache_size": 128, 00:36:59.877 "large_cache_size": 16, 00:36:59.877 "task_count": 2048, 00:36:59.877 "sequence_count": 2048, 00:36:59.877 "buf_count": 2048 00:36:59.877 } 00:36:59.877 } 00:36:59.877 ] 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "subsystem": "bdev", 00:36:59.877 "config": [ 00:36:59.877 { 00:36:59.877 "method": "bdev_set_options", 00:36:59.877 "params": { 00:36:59.877 "bdev_io_pool_size": 65535, 00:36:59.877 "bdev_io_cache_size": 256, 00:36:59.877 "bdev_auto_examine": true, 00:36:59.877 "iobuf_small_cache_size": 128, 00:36:59.877 "iobuf_large_cache_size": 16 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "method": "bdev_raid_set_options", 00:36:59.877 "params": { 00:36:59.877 "process_window_size_kb": 1024 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 
"method": "bdev_iscsi_set_options", 00:36:59.877 "params": { 00:36:59.877 "timeout_sec": 30 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "method": "bdev_nvme_set_options", 00:36:59.877 "params": { 00:36:59.877 "action_on_timeout": "none", 00:36:59.877 "timeout_us": 0, 00:36:59.877 "timeout_admin_us": 0, 00:36:59.877 "keep_alive_timeout_ms": 10000, 00:36:59.877 "arbitration_burst": 0, 00:36:59.877 "low_priority_weight": 0, 00:36:59.877 "medium_priority_weight": 0, 00:36:59.877 "high_priority_weight": 0, 00:36:59.877 "nvme_adminq_poll_period_us": 10000, 00:36:59.877 "nvme_ioq_poll_period_us": 0, 00:36:59.877 "io_queue_requests": 512, 00:36:59.877 "delay_cmd_submit": true, 00:36:59.877 "transport_retry_count": 4, 00:36:59.877 "bdev_retry_count": 3, 00:36:59.877 "transport_ack_timeout": 0, 00:36:59.877 "ctrlr_loss_timeout_sec": 0, 00:36:59.877 "reconnect_delay_sec": 0, 00:36:59.877 "fast_io_fail_timeout_sec": 0, 00:36:59.877 "disable_auto_failback": false, 00:36:59.877 "generate_uuids": false, 00:36:59.877 "transport_tos": 0, 00:36:59.877 "nvme_error_stat": false, 00:36:59.877 "rdma_srq_size": 0, 00:36:59.877 "io_path_stat": false, 00:36:59.877 "allow_accel_sequence": false, 00:36:59.877 "rdma_max_cq_size": 0, 00:36:59.877 "rdma_cm_event_timeout_ms": 0, 00:36:59.877 "dhchap_digests": [ 00:36:59.877 "sha256", 00:36:59.877 "sha384", 00:36:59.877 "sha512" 00:36:59.877 ], 00:36:59.877 "dhchap_dhgroups": [ 00:36:59.877 "null", 00:36:59.877 "ffdhe2048", 00:36:59.877 "ffdhe3072", 00:36:59.877 "ffdhe4096", 00:36:59.877 "ffdhe6144", 00:36:59.877 "ffdhe8192" 00:36:59.877 ] 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "method": "bdev_nvme_attach_controller", 00:36:59.877 "params": { 00:36:59.877 "name": "nvme0", 00:36:59.877 "trtype": "TCP", 00:36:59.877 "adrfam": "IPv4", 00:36:59.877 "traddr": "127.0.0.1", 00:36:59.877 "trsvcid": "4420", 00:36:59.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.877 "prchk_reftag": false, 00:36:59.877 "prchk_guard": false, 00:36:59.877 "ctrlr_loss_timeout_sec": 0, 00:36:59.877 "reconnect_delay_sec": 0, 00:36:59.877 "fast_io_fail_timeout_sec": 0, 00:36:59.877 "psk": "key0", 00:36:59.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.877 "hdgst": false, 00:36:59.877 "ddgst": false 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "method": "bdev_nvme_set_hotplug", 00:36:59.877 "params": { 00:36:59.877 "period_us": 100000, 00:36:59.877 "enable": false 00:36:59.877 } 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "method": "bdev_wait_for_examine" 00:36:59.877 } 00:36:59.877 ] 00:36:59.877 }, 00:36:59.877 { 00:36:59.877 "subsystem": "nbd", 00:36:59.877 "config": [] 00:36:59.877 } 00:36:59.877 ] 00:36:59.877 }' 00:36:59.877 [2024-07-15 16:35:42.760752] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:36:59.877 [2024-07-15 16:35:42.760865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507443 ] 00:36:59.877 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.877 [2024-07-15 16:35:42.824284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.136 [2024-07-15 16:35:42.915436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.136 [2024-07-15 16:35:43.105313] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:01.082 16:35:43 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:01.082 16:35:43 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:01.082 16:35:43 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:01.082 16:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.082 16:35:43 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:01.082 16:35:43 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:01.082 16:35:43 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:01.082 16:35:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:01.082 16:35:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:01.082 16:35:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:01.082 16:35:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:01.082 16:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.339 16:35:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:01.339 16:35:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:01.339 16:35:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:01.339 16:35:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:01.339 16:35:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:01.339 16:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.339 16:35:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:01.597 16:35:44 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:01.597 16:35:44 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:01.597 16:35:44 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:01.597 16:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:01.854 16:35:44 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:01.854 16:35:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:01.854 16:35:44 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.F8LnaHqGNj /tmp/tmp.vJbONtKdqD 00:37:01.854 16:35:44 keyring_file -- keyring/file.sh@20 -- # killprocess 507443 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 507443 ']' 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@950 -- # kill -0 507443 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 507443 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 507443' 00:37:01.854 killing process with pid 507443 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@965 -- # kill 507443 00:37:01.854 Received shutdown signal, test time was about 1.000000 seconds 00:37:01.854 00:37:01.854 Latency(us) 00:37:01.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:01.854 =================================================================================================================== 00:37:01.854 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:01.854 16:35:44 keyring_file -- common/autotest_common.sh@970 -- # wait 507443 00:37:02.112 16:35:44 keyring_file -- keyring/file.sh@21 -- # killprocess 506002 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 506002 ']' 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@950 -- # kill -0 506002 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 506002 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 506002' 00:37:02.112 killing process with pid 506002 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@965 -- # kill 506002 00:37:02.112 [2024-07-15 16:35:44.965728] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:02.112 16:35:44 keyring_file -- common/autotest_common.sh@970 -- # wait 506002 00:37:02.685 00:37:02.685 real 0m13.931s 00:37:02.685 user 0m34.721s 00:37:02.685 sys 0m3.309s 00:37:02.685 16:35:45 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:02.685 16:35:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.685 ************************************ 00:37:02.685 END TEST keyring_file 00:37:02.685 ************************************ 00:37:02.685 16:35:45 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:02.685 16:35:45 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:02.685 16:35:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:02.685 16:35:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:02.685 16:35:45 -- common/autotest_common.sh@10 -- # set +x 00:37:02.685 ************************************ 00:37:02.685 START TEST keyring_linux 00:37:02.685 ************************************ 00:37:02.685 16:35:45 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:02.685 * Looking for test storage... 
00:37:02.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.686 16:35:45 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.686 16:35:45 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.686 16:35:45 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.686 16:35:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.686 16:35:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.686 16:35:45 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.686 16:35:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:02.686 16:35:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:02.686 16:35:45 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:02.686 /tmp/:spdk-test:key0 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:02.686 16:35:45 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:02.686 16:35:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:02.686 /tmp/:spdk-test:key1 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=507823 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:02.686 16:35:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 507823 00:37:02.686 16:35:45 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 507823 ']' 00:37:02.686 16:35:45 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.686 16:35:45 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:02.686 16:35:45 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.686 16:35:45 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:02.687 16:35:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:02.687 [2024-07-15 16:35:45.616342] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
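spdk_tgt is launched in the background here and waitforlisten gates the test until the target's RPC socket answers. The helper's internals are not shown in this log; a minimal sketch of such a readiness loop, assuming rpc.py's rpc_get_methods as the probe:

    # Poll until the process is up and its UNIX-domain RPC socket responds.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                     # timed out
    }

The same gate is reused for pid 507902 below on the bdevperf side, pointed at /var/tmp/bperf.sock instead of /var/tmp/spdk.sock.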
00:37:02.687 [2024-07-15 16:35:45.616426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507823 ] 00:37:02.687 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.013 [2024-07-15 16:35:45.679314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.013 [2024-07-15 16:35:45.776269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:03.273 16:35:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:03.273 [2024-07-15 16:35:46.040198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.273 null0 00:37:03.273 [2024-07-15 16:35:46.072240] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:03.273 [2024-07-15 16:35:46.072772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.273 16:35:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:03.273 550340222 00:37:03.273 16:35:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:03.273 833322801 00:37:03.273 16:35:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=507902 00:37:03.273 16:35:46 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:03.273 16:35:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 507902 /var/tmp/bperf.sock 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 507902 ']' 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:03.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:03.273 16:35:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:03.273 [2024-07-15 16:35:46.138900] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
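The two keyctl add calls above load the formatted PSKs into the kernel session keyring, and the serials they print (550340222 and 833322801) are what the later search and unlink steps resolve. The NVMeTLSkey-1 strings themselves come from the inline python - steps in prep_key; a sketch of what that step appears to compute — prefix, two-hex-digit hash indicator, then base64 of the ASCII key bytes plus their little-endian CRC32 — which reproduces the key0 string seen here exactly:

    format_interchange_psk() {
        local key=$1 digest=$2
        python - "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    raw = sys.argv[1].encode("ascii")            # key kept as ASCII text, not hex-decoded
    crc = zlib.crc32(raw).to_bytes(4, "little")  # 4-byte little-endian CRC32 appended
    print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
    EOF
    }
    # $ format_interchange_psk 00112233445566778899aabbccddeeff 0
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: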
00:37:03.273 [2024-07-15 16:35:46.138977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507902 ] 00:37:03.273 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.273 [2024-07-15 16:35:46.203393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.530 [2024-07-15 16:35:46.297495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.530 16:35:46 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:03.530 16:35:46 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:03.530 16:35:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:03.530 16:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:03.787 16:35:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:03.787 16:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:04.044 16:35:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:04.044 16:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:04.301 [2024-07-15 16:35:47.143010] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:04.301 nvme0n1 00:37:04.301 16:35:47 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:04.301 16:35:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:04.301 16:35:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:04.301 16:35:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:04.301 16:35:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:04.301 16:35:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.558 16:35:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:04.558 16:35:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:04.558 16:35:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:04.558 16:35:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:04.558 16:35:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.558 16:35:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.558 16:35:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:04.814 16:35:47 keyring_linux -- keyring/linux.sh@25 -- # sn=550340222 00:37:04.814 16:35:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:04.814 16:35:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
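check_keys above cross-checks SPDK's view of :spdk-test:key0 against the kernel's session keyring, and cleanup at the end unlinks by serial number. All four keyctl operations involved appear in this log; laid out as a lifecycle, with $psk standing in for the formatted NVMeTLSkey-1 string:

    # Load the PSK into the session keyring; keyctl prints the key's serial.
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)
    # Resolve the key's name back to its serial (what get_keysn does).
    keyctl search @s user :spdk-test:key0
    # Dump the payload to confirm it matches the formatted PSK.
    keyctl print "$sn"
    # Remove the key when done ("1 links removed" in the cleanup below).
    keyctl unlink "$sn"

The [[ 550340222 == \5\5\0\3\4\0\2\2\2 ]] comparison that follows is just bash escaping every character of the pattern so the serial is matched literally rather than as a glob.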
00:37:04.814 16:35:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 550340222 == \5\5\0\3\4\0\2\2\2 ]] 00:37:04.814 16:35:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 550340222 00:37:04.814 16:35:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:04.814 16:35:47 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:05.072 Running I/O for 1 seconds... 00:37:06.006 00:37:06.006 Latency(us) 00:37:06.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.006 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:06.006 nvme0n1 : 1.01 7574.64 29.59 0.00 0.00 16760.31 8495.41 26602.76 00:37:06.006 =================================================================================================================== 00:37:06.006 Total : 7574.64 29.59 0.00 0.00 16760.31 8495.41 26602.76 00:37:06.006 0 00:37:06.006 16:35:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:06.006 16:35:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:06.263 16:35:49 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:06.263 16:35:49 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:06.263 16:35:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:06.263 16:35:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:06.263 16:35:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.263 16:35:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:06.520 16:35:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:06.520 16:35:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:06.520 16:35:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:06.520 16:35:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.520 16:35:49 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:06.520 16:35:49 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.520 16:35:49 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:06.520 16:35:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:06.520 16:35:49 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:06.520 16:35:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:06.520 16:35:49 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.520 16:35:49 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:06.779 [2024-07-15 16:35:49.608792] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:06.779 [2024-07-15 16:35:49.609131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ba730 (107): Transport endpoint is not connected 00:37:06.779 [2024-07-15 16:35:49.610121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ba730 (9): Bad file descriptor 00:37:06.779 [2024-07-15 16:35:49.611120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:06.779 [2024-07-15 16:35:49.611146] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:06.779 [2024-07-15 16:35:49.611171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:06.779 request: 00:37:06.779 { 00:37:06.779 "name": "nvme0", 00:37:06.779 "trtype": "tcp", 00:37:06.779 "traddr": "127.0.0.1", 00:37:06.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.779 "adrfam": "ipv4", 00:37:06.779 "trsvcid": "4420", 00:37:06.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.779 "psk": ":spdk-test:key1", 00:37:06.779 "method": "bdev_nvme_attach_controller", 00:37:06.779 "req_id": 1 00:37:06.779 } 00:37:06.779 Got JSON-RPC error response 00:37:06.779 response: 00:37:06.779 { 00:37:06.779 "code": -5, 00:37:06.779 "message": "Input/output error" 00:37:06.779 } 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@33 -- # sn=550340222 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 550340222 00:37:06.779 1 links removed 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@33 -- # sn=833322801 00:37:06.779 16:35:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 833322801 00:37:06.779 1 links removed 00:37:06.779 16:35:49 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 507902 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 507902 ']' 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 507902 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 507902 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 507902' 00:37:06.779 killing process with pid 507902 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@965 -- # kill 507902 00:37:06.779 Received shutdown signal, test time was about 1.000000 seconds 00:37:06.779 00:37:06.779 Latency(us) 00:37:06.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.779 =================================================================================================================== 00:37:06.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:06.779 16:35:49 keyring_linux -- common/autotest_common.sh@970 -- # wait 507902 00:37:07.039 16:35:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 507823 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 507823 ']' 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 507823 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 507823 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 507823' 00:37:07.039 killing process with pid 507823 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@965 -- # kill 507823 00:37:07.039 16:35:49 keyring_linux -- common/autotest_common.sh@970 -- # wait 507823 00:37:07.607 00:37:07.607 real 0m4.914s 00:37:07.607 user 0m9.261s 00:37:07.607 sys 0m1.677s 00:37:07.607 16:35:50 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:07.607 16:35:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:07.607 ************************************ 00:37:07.607 END TEST keyring_linux 00:37:07.607 ************************************ 00:37:07.607 16:35:50 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 
']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:07.607 16:35:50 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:07.607 16:35:50 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:07.607 16:35:50 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:07.607 16:35:50 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:07.607 16:35:50 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:07.607 16:35:50 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:07.607 16:35:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:07.607 16:35:50 -- common/autotest_common.sh@10 -- # set +x 00:37:07.607 16:35:50 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:07.607 16:35:50 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:07.607 16:35:50 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:07.607 16:35:50 -- common/autotest_common.sh@10 -- # set +x 00:37:09.508 INFO: APP EXITING 00:37:09.508 INFO: killing all VMs 00:37:09.508 INFO: killing vhost app 00:37:09.508 INFO: EXIT DONE 00:37:10.443 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:37:10.443 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:10.443 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:10.443 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:10.443 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:10.443 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:10.443 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:10.443 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:10.443 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:10.443 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:10.443 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:10.443 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:10.443 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:10.443 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:10.443 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:10.443 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:10.443 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:11.814 Cleaning 00:37:11.814 Removing: /var/run/dpdk/spdk0/config 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:11.814 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:11.815 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:11.815 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:11.815 Removing: /var/run/dpdk/spdk1/config 00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:11.815 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:37:11.815 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:37:11.815 Removing: /var/run/dpdk/spdk1/hugepage_info
00:37:11.815 Removing: /var/run/dpdk/spdk1/mp_socket
00:37:11.815 Removing: /var/run/dpdk/spdk2/config
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:37:11.815 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:37:11.815 Removing: /var/run/dpdk/spdk2/hugepage_info
00:37:11.815 Removing: /var/run/dpdk/spdk3/config
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:37:11.815 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:37:11.815 Removing: /var/run/dpdk/spdk3/hugepage_info
00:37:11.815 Removing: /var/run/dpdk/spdk4/config
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:37:11.815 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:37:11.815 Removing: /var/run/dpdk/spdk4/hugepage_info
00:37:11.815 Removing: /dev/shm/bdev_svc_trace.1
00:37:11.815 Removing: /dev/shm/nvmf_trace.0
00:37:11.815 Removing: /dev/shm/spdk_tgt_trace.pid187402
00:37:11.815 Removing: /var/run/dpdk/spdk0
00:37:11.815 Removing: /var/run/dpdk/spdk1
00:37:11.815 Removing: /var/run/dpdk/spdk2
00:37:11.815 Removing: /var/run/dpdk/spdk3
00:37:11.815 Removing: /var/run/dpdk/spdk4
00:37:11.815 Removing: /var/run/dpdk/spdk_pid185840
00:37:11.815 Removing: /var/run/dpdk/spdk_pid186583
00:37:11.815 Removing: /var/run/dpdk/spdk_pid187402
00:37:11.815 Removing: /var/run/dpdk/spdk_pid187833
00:37:11.815 Removing: /var/run/dpdk/spdk_pid188520
00:37:11.815 Removing: /var/run/dpdk/spdk_pid188660
00:37:11.815 Removing: /var/run/dpdk/spdk_pid189378
00:37:11.815 Removing: /var/run/dpdk/spdk_pid189387
00:37:11.815 Removing: /var/run/dpdk/spdk_pid189628
00:37:11.815 Removing: /var/run/dpdk/spdk_pid190820
00:37:11.815 Removing: /var/run/dpdk/spdk_pid191867
00:37:11.815 Removing: /var/run/dpdk/spdk_pid192050
00:37:11.815 Removing: /var/run/dpdk/spdk_pid192237
00:37:11.815 Removing: /var/run/dpdk/spdk_pid192443
00:37:11.815 Removing: /var/run/dpdk/spdk_pid192631
00:37:11.815 Removing: /var/run/dpdk/spdk_pid192792
00:37:11.815 Removing: /var/run/dpdk/spdk_pid193056
00:37:11.815 Removing: /var/run/dpdk/spdk_pid193252
00:37:11.815 Removing: /var/run/dpdk/spdk_pid193699
00:37:11.815 Removing: /var/run/dpdk/spdk_pid196054
00:37:11.815 Removing: /var/run/dpdk/spdk_pid196224
00:37:11.815 Removing: /var/run/dpdk/spdk_pid196503
00:37:11.815 Removing: /var/run/dpdk/spdk_pid196507
00:37:11.815 Removing: /var/run/dpdk/spdk_pid196818
00:37:11.815 Removing: /var/run/dpdk/spdk_pid196947
00:37:11.815 Removing: /var/run/dpdk/spdk_pid197247
00:37:11.815 Removing: /var/run/dpdk/spdk_pid197380
00:37:11.815 Removing: /var/run/dpdk/spdk_pid197545
00:37:11.815 Removing: /var/run/dpdk/spdk_pid197560
00:37:11.815 Removing: /var/run/dpdk/spdk_pid197725
00:37:11.815 Removing: /var/run/dpdk/spdk_pid197850
00:37:11.815 Removing: /var/run/dpdk/spdk_pid198219
00:37:11.815 Removing: /var/run/dpdk/spdk_pid198373
00:37:11.815 Removing: /var/run/dpdk/spdk_pid198566
00:37:11.815 Removing: /var/run/dpdk/spdk_pid198732
00:37:11.815 Removing: /var/run/dpdk/spdk_pid198885
00:37:11.815 Removing: /var/run/dpdk/spdk_pid198945
00:37:11.815 Removing: /var/run/dpdk/spdk_pid199222
00:37:11.815 Removing: /var/run/dpdk/spdk_pid199380
00:37:11.815 Removing: /var/run/dpdk/spdk_pid199537
00:37:11.815 Removing: /var/run/dpdk/spdk_pid199690
00:37:11.815 Removing: /var/run/dpdk/spdk_pid199967
00:37:11.815 Removing: /var/run/dpdk/spdk_pid200125
00:37:11.815 Removing: /var/run/dpdk/spdk_pid200276
00:37:11.815 Removing: /var/run/dpdk/spdk_pid200550
00:37:11.815 Removing: /var/run/dpdk/spdk_pid200710
00:37:11.815 Removing: /var/run/dpdk/spdk_pid200870
00:37:11.815 Removing: /var/run/dpdk/spdk_pid201027
00:37:11.815 Removing: /var/run/dpdk/spdk_pid201297
00:37:11.815 Removing: /var/run/dpdk/spdk_pid201459
00:37:11.815 Removing: /var/run/dpdk/spdk_pid201612
00:37:11.815 Removing: /var/run/dpdk/spdk_pid201885
00:37:11.815 Removing: /var/run/dpdk/spdk_pid202042
00:37:11.815 Removing: /var/run/dpdk/spdk_pid202252
00:37:11.815 Removing: /var/run/dpdk/spdk_pid202477
00:37:11.815 Removing: /var/run/dpdk/spdk_pid202749
00:37:11.815 Removing: /var/run/dpdk/spdk_pid202911
00:37:12.074 Removing: /var/run/dpdk/spdk_pid203096
00:37:12.074 Removing: /var/run/dpdk/spdk_pid203285
00:37:12.074 Removing: /var/run/dpdk/spdk_pid205876
00:37:12.074 Removing: /var/run/dpdk/spdk_pid258683
00:37:12.074 Removing: /var/run/dpdk/spdk_pid261310
00:37:12.074 Removing: /var/run/dpdk/spdk_pid268748
00:37:12.074 Removing: /var/run/dpdk/spdk_pid271945
00:37:12.074 Removing: /var/run/dpdk/spdk_pid274306
00:37:12.074 Removing: /var/run/dpdk/spdk_pid274710
00:37:12.074 Removing: /var/run/dpdk/spdk_pid281971
00:37:12.074 Removing: /var/run/dpdk/spdk_pid281976
00:37:12.074 Removing: /var/run/dpdk/spdk_pid282563
00:37:12.074 Removing: /var/run/dpdk/spdk_pid283175
00:37:12.074 Removing: /var/run/dpdk/spdk_pid283824
00:37:12.074 Removing: /var/run/dpdk/spdk_pid284223
00:37:12.074 Removing: /var/run/dpdk/spdk_pid284237
00:37:12.074 Removing: /var/run/dpdk/spdk_pid284371
00:37:12.074 Removing: /var/run/dpdk/spdk_pid284505
00:37:12.074 Removing: /var/run/dpdk/spdk_pid284511
00:37:12.074 Removing: /var/run/dpdk/spdk_pid285167
00:37:12.074 Removing: /var/run/dpdk/spdk_pid285833
00:37:12.074 Removing: /var/run/dpdk/spdk_pid286373
00:37:12.074 Removing: /var/run/dpdk/spdk_pid286772
00:37:12.074 Removing: /var/run/dpdk/spdk_pid286895
00:37:12.074 Removing: /var/run/dpdk/spdk_pid287034
00:37:12.074 Removing: /var/run/dpdk/spdk_pid287916
00:37:12.074 Removing: /var/run/dpdk/spdk_pid288631
00:37:12.074 Removing: /var/run/dpdk/spdk_pid294109
00:37:12.074 Removing: /var/run/dpdk/spdk_pid294386
00:37:12.074 Removing: /var/run/dpdk/spdk_pid297413
00:37:12.074 Removing: /var/run/dpdk/spdk_pid301118
00:37:12.074 Removing: /var/run/dpdk/spdk_pid303167
00:37:12.074 Removing: /var/run/dpdk/spdk_pid309449
00:37:12.074 Removing: /var/run/dpdk/spdk_pid314661
00:37:12.074 Removing: /var/run/dpdk/spdk_pid315856
00:37:12.074 Removing: /var/run/dpdk/spdk_pid316555
00:37:12.074 Removing: /var/run/dpdk/spdk_pid326752
00:37:12.074 Removing: /var/run/dpdk/spdk_pid328856
00:37:12.074 Removing: /var/run/dpdk/spdk_pid354132
00:37:12.074 Removing: /var/run/dpdk/spdk_pid357535
00:37:12.074 Removing: /var/run/dpdk/spdk_pid358721
00:37:12.074 Removing: /var/run/dpdk/spdk_pid360032
00:37:12.074 Removing: /var/run/dpdk/spdk_pid360168
00:37:12.074 Removing: /var/run/dpdk/spdk_pid360188
00:37:12.074 Removing: /var/run/dpdk/spdk_pid360318
00:37:12.074 Removing: /var/run/dpdk/spdk_pid360758
00:37:12.074 Removing: /var/run/dpdk/spdk_pid361960
00:37:12.074 Removing: /var/run/dpdk/spdk_pid362678
00:37:12.074 Removing: /var/run/dpdk/spdk_pid363003
00:37:12.074 Removing: /var/run/dpdk/spdk_pid364591
00:37:12.074 Removing: /var/run/dpdk/spdk_pid365017
00:37:12.074 Removing: /var/run/dpdk/spdk_pid365572
00:37:12.074 Removing: /var/run/dpdk/spdk_pid367976
00:37:12.074 Removing: /var/run/dpdk/spdk_pid371251
00:37:12.074 Removing: /var/run/dpdk/spdk_pid374777
00:37:12.074 Removing: /var/run/dpdk/spdk_pid398345
00:37:12.074 Removing: /var/run/dpdk/spdk_pid400975
00:37:12.074 Removing: /var/run/dpdk/spdk_pid404881
00:37:12.074 Removing: /var/run/dpdk/spdk_pid405828
00:37:12.074 Removing: /var/run/dpdk/spdk_pid406925
00:37:12.074 Removing: /var/run/dpdk/spdk_pid409473
00:37:12.074 Removing: /var/run/dpdk/spdk_pid411765
00:37:12.074 Removing: /var/run/dpdk/spdk_pid415963
00:37:12.074 Removing: /var/run/dpdk/spdk_pid415965
00:37:12.074 Removing: /var/run/dpdk/spdk_pid419363
00:37:12.074 Removing: /var/run/dpdk/spdk_pid419619
00:37:12.074 Removing: /var/run/dpdk/spdk_pid419753
00:37:12.074 Removing: /var/run/dpdk/spdk_pid420026
00:37:12.074 Removing: /var/run/dpdk/spdk_pid420040
00:37:12.074 Removing: /var/run/dpdk/spdk_pid421103
00:37:12.074 Removing: /var/run/dpdk/spdk_pid422298
00:37:12.074 Removing: /var/run/dpdk/spdk_pid423573
00:37:12.074 Removing: /var/run/dpdk/spdk_pid424756
00:37:12.074 Removing: /var/run/dpdk/spdk_pid425930
00:37:12.074 Removing: /var/run/dpdk/spdk_pid427106
00:37:12.074 Removing: /var/run/dpdk/spdk_pid430924
00:37:12.074 Removing: /var/run/dpdk/spdk_pid431267
00:37:12.074 Removing: /var/run/dpdk/spdk_pid432545
00:37:12.074 Removing: /var/run/dpdk/spdk_pid433276
00:37:12.074 Removing: /var/run/dpdk/spdk_pid437008
00:37:12.074 Removing: /var/run/dpdk/spdk_pid438975
00:37:12.074 Removing: /var/run/dpdk/spdk_pid442284
00:37:12.074 Removing: /var/run/dpdk/spdk_pid445863
00:37:12.074 Removing: /var/run/dpdk/spdk_pid452844
00:37:12.074 Removing: /var/run/dpdk/spdk_pid457202
00:37:12.074 Removing: /var/run/dpdk/spdk_pid457204
00:37:12.074 Removing: /var/run/dpdk/spdk_pid469717
00:37:12.074 Removing: /var/run/dpdk/spdk_pid470123
00:37:12.074 Removing: /var/run/dpdk/spdk_pid470533
00:37:12.074 Removing: /var/run/dpdk/spdk_pid471062
00:37:12.074 Removing: /var/run/dpdk/spdk_pid471519
00:37:12.074 Removing: /var/run/dpdk/spdk_pid471922
00:37:12.074 Removing: /var/run/dpdk/spdk_pid472449
00:37:12.074 Removing: /var/run/dpdk/spdk_pid472853
00:37:12.074 Removing: /var/run/dpdk/spdk_pid475362
00:37:12.074 Removing: /var/run/dpdk/spdk_pid475506
00:37:12.074 Removing: /var/run/dpdk/spdk_pid479316
00:37:12.074 Removing: /var/run/dpdk/spdk_pid479397
00:37:12.074 Removing: /var/run/dpdk/spdk_pid481696
00:37:12.332 Removing: /var/run/dpdk/spdk_pid486625
00:37:12.332 Removing: /var/run/dpdk/spdk_pid486630
00:37:12.332 Removing: /var/run/dpdk/spdk_pid489544
00:37:12.332 Removing: /var/run/dpdk/spdk_pid490945
00:37:12.332 Removing: /var/run/dpdk/spdk_pid492354
00:37:12.332 Removing: /var/run/dpdk/spdk_pid493094
00:37:12.332 Removing: /var/run/dpdk/spdk_pid494497
00:37:12.332 Removing: /var/run/dpdk/spdk_pid495289
00:37:12.332 Removing: /var/run/dpdk/spdk_pid500554
00:37:12.332 Removing: /var/run/dpdk/spdk_pid500926
00:37:12.332 Removing: /var/run/dpdk/spdk_pid501314
00:37:12.332 Removing: /var/run/dpdk/spdk_pid502877
00:37:12.332 Removing: /var/run/dpdk/spdk_pid503151
00:37:12.332 Removing: /var/run/dpdk/spdk_pid503547
00:37:12.332 Removing: /var/run/dpdk/spdk_pid506002
00:37:12.332 Removing: /var/run/dpdk/spdk_pid506007
00:37:12.332 Removing: /var/run/dpdk/spdk_pid507443
00:37:12.332 Removing: /var/run/dpdk/spdk_pid507823
00:37:12.332 Removing: /var/run/dpdk/spdk_pid507902
00:37:12.332 Clean
00:37:12.332 16:35:55 -- common/autotest_common.sh@1447 -- # return 0
00:37:12.332 16:35:55 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:37:12.332 16:35:55 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:12.332 16:35:55 -- common/autotest_common.sh@10 -- # set +x
00:37:12.332 16:35:55 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:37:12.332 16:35:55 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:12.332 16:35:55 -- common/autotest_common.sh@10 -- # set +x
00:37:12.332 16:35:55 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:12.332 16:35:55 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:37:12.332 16:35:55 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:37:12.333 16:35:55 -- spdk/autotest.sh@391 -- # hash lcov
00:37:12.333 16:35:55 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:37:12.333 16:35:55 -- spdk/autotest.sh@393 -- # hostname
00:37:12.333 16:35:55 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:37:12.590 geninfo: WARNING: invalid characters removed from testname!
00:37:44.640 16:36:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:44.640 16:36:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:47.166 16:36:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:50.442 16:36:32 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:52.969 16:36:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:56.240 16:36:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:58.797 16:36:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:58.797 16:36:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:58.797 16:36:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:58.797 16:36:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:58.797 16:36:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:58.797 16:36:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:58.797 16:36:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:58.797 16:36:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:58.797 16:36:41 -- paths/export.sh@5 -- $ export PATH
00:37:58.797 16:36:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:58.797 16:36:41 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:58.797 16:36:41 -- common/autobuild_common.sh@437 -- $ date +%s
00:37:58.797 16:36:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721054201.XXXXXX
00:37:58.797 16:36:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721054201.HDk6m4
00:37:58.797 16:36:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:37:58.797 16:36:41 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']'
00:37:58.797 16:36:41 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:58.797 16:36:41 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:58.798 16:36:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:58.798 16:36:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:58.798 16:36:41 -- common/autobuild_common.sh@453 -- $ get_config_params
00:37:58.798 16:36:41 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:37:58.798 16:36:41 -- common/autotest_common.sh@10 -- $ set +x
00:37:58.798 16:36:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:37:58.798 16:36:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:37:58.798 16:36:41 -- pm/common@17 -- $ local monitor
00:37:58.798 16:36:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.798 16:36:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.798 16:36:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.798 16:36:41 -- pm/common@21 -- $ date +%s
00:37:58.798 16:36:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:58.798 16:36:41 -- pm/common@21 -- $ date +%s
00:37:58.798 16:36:41 -- pm/common@25 -- $ sleep 1
00:37:58.798 16:36:41 -- pm/common@21 -- $ date +%s
00:37:58.798 16:36:41 -- pm/common@21 -- $ date +%s
00:37:58.798 16:36:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054201
00:37:58.798 16:36:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054201
00:37:58.798 16:36:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054201
00:37:58.798 16:36:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721054201
00:37:58.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054201_collect-vmstat.pm.log
00:37:58.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054201_collect-cpu-load.pm.log
00:37:58.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054201_collect-cpu-temp.pm.log
00:37:58.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721054201_collect-bmc-pm.bmc.pm.log
00:37:59.737 16:36:42 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:37:59.737 16:36:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:37:59.737 16:36:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:59.737 16:36:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:59.737 16:36:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:59.737 16:36:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:59.737 16:36:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:59.737 16:36:42 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:59.737 16:36:42 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:59.737 16:36:42 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:59.737 16:36:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:59.737 16:36:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:59.737 16:36:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:59.737 16:36:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:59.737 16:36:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:59.737 16:36:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:59.737 16:36:42 -- pm/common@44 -- $ pid=519667
00:37:59.737 16:36:42 -- pm/common@50 -- $ kill -TERM 519667
00:37:59.737 16:36:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:59.737 16:36:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:59.737 16:36:42 -- pm/common@44 -- $ pid=519669
00:37:59.737 16:36:42 -- pm/common@50 -- $ kill -TERM 519669
00:37:59.737 16:36:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:59.737 16:36:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:59.737 16:36:42 -- pm/common@44 -- $ pid=519671
00:37:59.737 16:36:42 -- pm/common@50 -- $ kill -TERM 519671
00:37:59.737 16:36:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:59.737 16:36:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:59.737 16:36:42 -- pm/common@44 -- $ pid=519702
00:37:59.737 16:36:42 -- pm/common@50 -- $ sudo -E kill -TERM 519702
00:37:59.737 + [[ -n 80891 ]]
00:37:59.737 + sudo kill 80891
00:37:59.747 [Pipeline] }
00:37:59.770 [Pipeline] // stage
00:37:59.776 [Pipeline] }
00:37:59.797 [Pipeline] // timeout
00:37:59.803 [Pipeline] }
00:37:59.826 [Pipeline] // catchError
00:37:59.832 [Pipeline] }
00:37:59.854 [Pipeline] // wrap
00:37:59.861 [Pipeline] }
00:37:59.880 [Pipeline] // catchError
00:37:59.890 [Pipeline] stage
00:37:59.893 [Pipeline] { (Epilogue)
00:37:59.909 [Pipeline] catchError
00:37:59.911 [Pipeline] {
00:37:59.926 [Pipeline] echo
00:37:59.928 Cleanup processes
00:37:59.935 [Pipeline] sh
00:38:00.222 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:00.222 519802 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:00.222 519932 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:00.241 [Pipeline] sh
00:38:00.531 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:00.531 ++ grep -v 'sudo pgrep'
00:38:00.531 ++ awk '{print $1}'
00:38:00.531 + sudo kill -9 519802
00:38:00.543 [Pipeline] sh
00:38:00.826 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:10.826 [Pipeline] sh
00:38:11.114 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:11.114 Artifacts sizes are good
00:38:11.127 [Pipeline] archiveArtifacts
00:38:11.133 Archiving artifacts
00:38:11.379 [Pipeline] sh
00:38:11.660 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:11.676 [Pipeline] cleanWs
00:38:11.686 [WS-CLEANUP] Deleting project workspace...
00:38:11.686 [WS-CLEANUP] Deferred wipeout is used...
00:38:11.692 [WS-CLEANUP] done
00:38:11.694 [Pipeline] }
00:38:11.716 [Pipeline] // catchError
00:38:11.730 [Pipeline] sh
00:38:12.011 + logger -p user.info -t JENKINS-CI
00:38:12.020 [Pipeline] }
00:38:12.037 [Pipeline] // stage
00:38:12.043 [Pipeline] }
00:38:12.081 [Pipeline] // node
00:38:12.087 [Pipeline] End of Pipeline
00:38:12.118 Finished: SUCCESS